By Siu Kei Chung, Clearmatics Technologies
The idea of an identity, a constant unit that defines one as a unique entity, seems to be an agreed-upon concept. Universal identity, the ability to identify any entity uniquely and identically across any system, is what we all strive for. Yet as we begin to attempt to define an identity and encompass the characteristics of an entity, we find that the boundaries between truth and perception become blurred. Identity becomes a spectrum, a continuous array of possible descriptions of a subject, and the granularity of any definition is called into question.
If I were to be identified, where would you start? It quickly becomes clear that identity is almost always a description of an entity within a context. Is Bob a hard-working employee? Is Alice a trustworthy friend? Each description may carry different connotations in different contexts, yet we still reach immediately for our vocabulary without understanding how identities should be defined, or for what purpose.
In this short paper I question the notion of a single discrete identity and explore the continuous nature of identity.
In any interaction between two individuals, whether online or offline, we are subject to evaluation by our counterparty. Our movements and actions feed a mostly subconscious process that determines how we are perceived by the other side. When an interaction involves a transaction, this becomes a very conscious process: judging whether our counterparty will fulfil their end of an agreement or not. Much of the game theory of interactions in open systems, such as social encounters, hinges on the correct perception of another's motivations. Trust thus becomes a function of both the perception of an identity and the context of the interaction.
What is trust? Given rational participants, two individuals will only participate in a system or interaction to the extent to which they believe their counterparty will not be malicious. Malicious here could mean anything interpreted as a betrayal of trust, but for simplicity we can imagine a simple game of giving money and having it returned. If Alice absolutely trusts Bob, she will give him any amount of money and expect it to be returned; if he is as trustworthy as she believes, that is what will happen. If Alice does not trust Bob at all, she will never give him any money, as she expects always to be robbed. Most interactions lie in between: each party forms a perceived aggregate of the other's incentives and from it calculates a level of trust, which in turn measures how far they will engage in the interaction. Most of the time these calculations reduce to emotional choices.
Trust therefore becomes an individual-specific, measurable extent to which someone will participate in an interaction. In more complex scenarios, a simplification of trust would be that an individual will only put at risk, in any given interaction, value that is less than what they perceive their counterparty would lose by being malicious. That becomes the measure of trust.
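This simplified measure of trust can be sketched as a decision rule. The function and parameter names below (will_engage, perceived_loss_if_malicious) are illustrative assumptions, not part of any existing protocol:

```python
# Hypothetical sketch: trust as a ceiling on value at risk.
# An individual engages only if the value they put at risk is less than
# what they believe their counterparty would lose by acting maliciously.

def will_engage(value_at_risk: float, perceived_loss_if_malicious: float) -> bool:
    """Return True if the interaction clears the trust threshold."""
    return value_at_risk < perceived_loss_if_malicious

# Alice believes Bob would forfeit 100 units (stake, reputation) by cheating.
print(will_engage(50.0, 100.0))   # risking 50 is acceptable
print(will_engage(150.0, 100.0))  # risking 150 is not
```

Under this rule, the perceived cost of the counterparty's defection sets an upper bound on engagement, which matches the intuition that trust scales with what the other side stands to lose.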
As conscious individuals we form our own ideas of self. To ourselves we are characterised mostly by our thoughts and the way we perceive them. We observe our own actions and derive a description set that matches who we believe we are. We do well to judge ourselves by our intentions, as we measure ourselves by our own values and ideals, but we measure others by their actions, as actions are all that are exhibited. These two poles rarely coincide, and we are usually bad judges of ourselves.
Self-identity is an important component of identity, as identities are continuously evolving; however, how we choose to see ourselves has no impact on the way the world interfaces with us. Until our actions define our character and provide a window into our psychology, our ideas of self-identity have no tangible effect on us as an identity. Arguably, identity does not exist as an isolated concept; it exists only in the context of other identities and how they interact. In effect, self-identity is simply an individual's isolated perception of self and plays no direct part in interactions.
In distributed, decentralised systems designed to become engines of some function, usually involving people, systems that may eventually encode entire economies or social structures, identity becomes paramount. The context in which identities operate dictates how the system will function and how incentive mechanisms uphold its correct functioning via governance protocols. All systems that involve people carry varying degrees of incentivisation depending on the system's purpose, and each individual's participation depends on their own self-interest, bounded by the possible benefit of participating.
In a system of participants who freely interact, each possesses a unique perception of every other, to varying degrees. Those degrees reduce according to their relevance to the interaction between any two individuals, and they become measurements in each participant's calculation of how best to interact (to maximise gain, whatever that means). The identity of each entity is never absolute: it becomes the union of all possible perceptions of a single entity, resulting in an intangible probability of the subject possessing certain traits.
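The idea that identity is a union of perceptions yielding a probability of certain traits can be sketched numerically. The aggregation below (a relevance-weighted average) is one assumed model among many, and all names are hypothetical:

```python
# Hypothetical sketch: an identity trait as an aggregate of perceptions.
# Each observer holds a subjective probability that the subject has a trait;
# relevance weights model how much each perception matters in a given context.

def perceived_trait_probability(perceptions: list[float],
                                weights: list[float]) -> float:
    """Combine per-observer probabilities into one contextual estimate
    using a relevance-weighted average."""
    total = sum(weights)
    return sum(p * w for p, w in zip(perceptions, weights)) / total

# Three observers rate Bob as 'trustworthy'; the second observer's
# perception is most relevant to this interaction.
estimate = perceived_trait_probability([0.9, 0.4, 0.7], [1.0, 3.0, 1.0])
print(estimate)
```

The point of the sketch is not the particular formula but that the resulting "identity" is always a contextual estimate, never an absolute fact about the subject.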
The complexity of unbounded identities makes them impossible to use in practice, and measuring perception is an equally impossible task. How can we encode enough about identity to allow future systems to be built without the constraints of current identity models? Incentives and trust are intertwined concepts that dictate how individuals participate in a system.
Will discrete identities suffice for all possible systems? Could we build a more complex system of identities around contextual definitions that take into account an individual's prior involvement in any given system?