While recent developments in electronic commerce have fueled a surge in interest around the subject of trust, it is an aspect of human interaction that is as old as civilization itself. One might say that trust is one of the foundations of society.
We typically think of trust as something that spans two or more humans and provides a basis for their interactions. While this is an accurate characterization of trust, it is not an exclusive one. Advances in technology over the past 30 years have significantly changed both the scope and meaning of this paradigm, chiefly because interaction now extends beyond human-to-human. We also need to consider human-to-machine interactions as well as machine-to-machine ones. Furthermore, these interactions can be chained together under conditional policies to form complex communication profiles that, in some instances, may not involve the direct participation of a human at all.
This blog analyzes the subject of trust and its close association with related subjects such as risk, assurance, and identity, and reviews the impact that trust has on technology and the dynamics of human interaction. I will begin by looking at trust in its basic definition and context. The blog will then focus on the impact of trust on technology and the advanced communication capabilities that have become prevalent in our lives. I hope it will offer a better understanding of trust in technology and human society from both a philosophical and practical standpoint.
How do I trust you?
This is the classic question, and one that is hard to quantify. Indeed, the answer may be different for different people, because some people are simply more ‘trusting’ (the cynical reader might think ‘gullible’) than others. There is also a degree of context, closely related to the risk assumed by the trusting party, that comes into play with every decision of trust. If we think about it, the manifestations quickly become mind-boggling. After all, there is a big difference between trusting your neighbor’s kid to cut your grass and trusting that same kid to babysit your children. There are certain pieces of additional information that you will typically require before extending your trust into the deeper context. Usually, this additional information provides the extra level of assurance needed to trust the neighbor’s kid in a babysitting scenario.
So, while the possible manifestations are quite numerous and complex, some common elements are present in every instance. The first point is that trust is always extended based on some level of assurance. The second point is that this relationship between trust and assurance is dependent upon the context of the subject matter on which trust is established. This context will always have an element of risk that is assumed by the extension of trust. This results in a threefold vector relationship, as shown in Figure 1. The diagram attempts to illustrate that the threefold vector is universal and that the subjects of trust (the context of its extension, if you will) fall in relative positions on the trust axis.
Figure 1 – The vector relationship between context, assurance & trust
As Figure 1 illustrates, there is a somewhat linear relationship between the three vectors. It is the subject of trust that provides the degree of non-linearity. Some subjects are rather relative. For example, I might not be too picky about my lawn, but others might be so sensitive as to rate the level of trust required as nearly equivalent to babysitting their kids. Some parents may be so sensitive to the issue of babysitting that they will require a full background check prior to extending trust. In other instances, things are more absolute. A good example is trusting an individual with the nuclear football, the top-secret attaché containing the instructions, authorization, and access codes for the nuclear warheads involved in the defense of the United States of America. For this subject, we assume that the individual is a member of the Department of Defense with a relatively high rank and has passed the integrity checks, background checks, and psychological stability testing needed to provide the level of assurance required to extend what could be perceived as the ultimate level of trust. Also consider that no single individual has this type of authority; it is a network of individuals acting synchronously according to well-defined procedures, which reduces the possibility of last-minute rogue behavior.
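To make the threefold relationship more concrete, here is a small sketch of my own (an illustration, not a formal model taken from the figure). Each context carries an assumed risk, the assurance required scales roughly linearly with that risk, and trust is extended only when assurance covers it. The risk and assurance values are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class TrustContext:
    """A context in which trust might be extended, carrying assumed risk."""
    name: str
    risk: float  # assumed risk for this context, on a 0.0-1.0 scale

def required_assurance(ctx: TrustContext) -> float:
    # The roughly linear relationship of Figure 1: higher-risk contexts
    # demand proportionally higher assurance before trust is extended.
    return ctx.risk

def extend_trust(ctx: TrustContext, assurance: float) -> bool:
    """Trust is extended only when assurance covers the context's risk."""
    return assurance >= required_assurance(ctx)

lawn = TrustContext("mow the lawn", risk=0.2)
babysit = TrustContext("babysit the kids", risk=0.8)

print(extend_trust(lawn, assurance=0.3))     # modest assurance suffices
print(extend_trust(babysit, assurance=0.3))  # the deeper context needs more
```

The same level of assurance that is plenty for the lawn falls well short for babysitting, which is exactly the relative positioning on the trust axis described above.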
We also need to consider the assurance cycle. For some extensions of trust, a one-time validation of assurance is all that is required. As an example, even the pickiest of yard owners will typically validate someone’s skill just once; after that, there is the assumption that the skill level is appropriate and unlikely to change. Likewise for babysitting: seldom will even the most selective parents run a full background check every time the same kid comes over to babysit. Of course, a lousy mowing job or a questionable event during babysitting might compromise the degree of trust and thus require revalidation. However, some positions are extremely sensitive, have a huge potential impact, and are non-reversible. A good example is the extension of trust to handle the nuclear football. In this instance, there are regular security and psychological tests as well as random spot testing and background checks. This assures that the integrity of the individual, as well as of those who support the attaché, is not in any way compromised.
From what I have described above, we can assume that there are four basic elements of trust: context, assurance, risk, and the assurance (validation) cycle.
There are also three basic modes of trust, some of which are deemed more solid than others. First, there is initial trust, also known as blind trust, or trusting at face value. For example, you need a level of initial trust simply to get out of bed and face the world in the morning. The concept assumes that the world is not outright hostile and that, while it is still a jungle out there, you trust your ability to make progress in it. Another good example is that you can pass someone on the sidewalk in most good neighborhoods and have faith that the individual will not try to attack you. Note that this is a two-way equation; the other individual must have the same perception. Initial trust is a key ingredient and provides the bootstrap for the other two more sophisticated modes. The second mode, associative trust, is the extension of trust to someone or something based on the reference and recommendation of another individual with whom you have already established a trusting relationship. Both initial trust and associative trust can be classified as temporary states of trust that progress toward the third and last mode, assured trust.
If risk is in some sense primary to trust, then there is a corresponding value in the level of assurance provided to the entity that enters into the trust relationship. Once again, refer to Figure 1 for a depiction of this vector relationship. As the level of risk in the trust relationship rises, the level of assurance must, in turn, be sufficient to cover it. However, there are more dimensions to consider, including the aspect of reward.
Reward can be considered the positive dimension of risk; the two exist in opposition. As the ratio of reward to assumed risk grows, it becomes more likely that an individual will move forward and assume the risk. Taken in the context of reward, an individual discounts the risk factor in their own mind. As a result, individuals do things that they would otherwise not ordinarily do, such as clicking on an icon on a questionable web page. When the degree of risk is higher than the potential reward, an individual will likely pass on the opportunity. This relationship is shown in Figure 2. Note that there are two boundary vectors in this diagram. The lower, liberal risk vector expects a lower level of assurance for a given equivalency in context; the higher, conservative risk vector expects stronger assurance for a relatively lower extension of trust. The sinusoidal line in the middle represents the decision vector of the individual or entity. It is drawn as such because it can be described as a waveform unique to that entity: some individuals or organizations may be fairly liberal, others more conservative, but each decision oscillates between perceived potential risk and reward. It is also important to note that the sinusoidal pattern is small at the origin of the graph and widens toward the absolute boundary vectors, which mark the potential range of decision. As risk and reward grow more significant, the sinusoid grows in proportion, representing the state of indecision that we typically encounter in high-stakes affairs, where the risk and reward potentials are exceptionally high.
Figure 2 – The relationship of reward and risk in trust
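The decision dynamic in Figure 2 can be caricatured in a few lines of code. This is a toy model of my own, where a hypothetical `tolerance` parameter stands in for where an entity's decision vector sits between the liberal and conservative boundaries:

```python
def decide(reward: float, risk: float, tolerance: float = 1.0) -> bool:
    """Toy decision rule: proceed when the reward-to-risk ratio clears
    the entity's own threshold.

    tolerance < 1.0 models a liberal decision-maker (accepts thinner
    margins); tolerance > 1.0 models a conservative one.
    """
    if risk <= 0:
        return True  # nothing at stake, trivially proceed
    return reward / risk >= tolerance

# The same opportunity reads differently to different entities:
print(decide(reward=5.0, risk=4.0, tolerance=0.8))  # liberal: proceeds
print(decide(reward=5.0, risk=4.0, tolerance=1.5))  # conservative: passes
```

The interesting cases, as the figure suggests, are the high-stakes ones where the ratio hovers near the threshold and small shifts in perception flip the outcome.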
This is common sense to some degree; few of us would argue with it. However, there are a few important points that are pertinent in today’s e-commerce environment. For example, the perception of assumed risk and potential reward can be misleading: what an individual perceives and what is really occurring can be two totally different things. Herein lies the root of all scamming and racketeering activities, and the addition of a cyber environment only provides another level of cover for further abstraction between perception and truth.
Another important consideration is that assurance (or insurance) can change this relationship. Assurance can decrease the degree of risk assumed and push the individual toward a favorable decision to accept it. For example, neither you nor I would purchase a book from an unknown online vendor with no validation and no privacy. The level of risk (placing your credit card number online, unprotected) is simply too high relative to the reward (a book, one that you presumably want; otherwise, we wouldn’t have this thought exercise). However, suppose the online vendor is well-known and your credit card information exists in an offline profile. In that case, the level of risk is minimized, and the purchase becomes a trivial decision, almost equivalent to standing in an actual bookstore. Assurance is further enhanced if your credit card covers fraudulent activity. The concept is illustrated further in Figure 3: as systems of assurance are put in place, they exert positive pressure on a given situation, and this pressure serves to reduce the perceived (and hopefully actual) degree of risk.
Figure 3 – The positive influence of increased assurance or insurance
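One way to picture the positive pressure of Figure 3 is as a discount applied to raw risk, with each layer of assurance shrinking the risk that remains. The weights below are invented purely for illustration:

```python
def effective_risk(raw_risk: float, assurance: float) -> float:
    """Toy formula (my own illustration): a layer of assurance applies
    positive pressure that discounts the remaining risk multiplicatively."""
    return raw_risk * (1.0 - assurance)

# Buying a book online: raw risk of exposing a card number, discounted
# first by vendor reputation, then by fraud coverage on the card.
raw = 0.9                # hypothetical raw risk
known_vendor = 0.6       # hypothetical assurance weights
fraud_coverage = 0.3

risk = effective_risk(raw, known_vendor)
risk = effective_risk(risk, fraud_coverage)
print(round(risk, 3))    # → 0.252
```

Stacking the two layers brings the hypothetical risk from 0.9 down to roughly 0.25, which is the sense in which a well-known vendor plus fraud protection turns a risky act into a trivial decision.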
We can deduce that providing increased assurance to individuals who participate in e-commerce is a good thing and will produce positive results. However, individuals can be misled, either by the degree of the perceived reward (think fake lotteries and sweepstakes) or by the degree of perceived assurance (anonymous SSL/TLS is the main avenue here). Many scams will try to do both. A good example is a sweepstakes email from a seemingly reputable company with the happy news that you are the winner; you only need to fill in some required information on a supposedly secure web site. You even get the SSL/TLS connection with the lock icon in the browser. So, assurance is a two-edged sword. The basic ingredients of a scam are the promise of a big reward and the illusion of assurance.
Ingenious but nefarious software that plants keystroke loggers, bots, trojans, or ransomware on a user’s PC can be triggered by merely visiting a web page or clicking on a link or attachment in an email. All it takes is a moment of indiscretion by a user, dazzled momentarily by the perception of some great potential reward; the code does the rest of the damage. Once the code is resident, all sorts of information can be garnered from the compromised system. With this approach, there is no need to dupe the user into entering anything online. The malignant party need only wait for the scheduled updates from its cyber-minion.
So, what is a user to do? It seems that, in a cyber sense, we are going back to the days immediately following the fall of the Roman Empire, or to the Old West, when your very survival often depended on the whims of the environment. Interestingly, many analogies have been drawn between the Internet and the Old West. However, we are now at a point in evolution where the analogy to the time following the Roman Empire (the Dark Ages) may be more appropriate. Many of the malicious parties are no longer just college kids or folks looking for a quick buck. As systems automation has become more prevalent, many malicious activities are being directed against infrastructure, and some of these venomous activities can even be traced back to national, religious, or political interests.
Using the Dark Ages analogy, you might view the typical enterprise or organization as a feudal kingdom behind solidly defended borders of rock and earth. From these ramparts, an enterprise does its business via various methods that securely provide access across its defenses: roadways with armies to travel on them, and ships to project power remotely when necessary. Carrying the analogy further, the single Internet user is like a peasant in a mud hut outside the border, whose defense is only as good as the probability of contact with malicious forces. They may run anti-virus software and install security updates, but the bottom line is that malware always holds an edge, just as offense always holds a marginal advantage over defense. If a user is careless with email or frequents contaminated websites, it is only a matter of time before they contract something that neither the security checks nor the anti-virus software recognizes, and by then it is too late. If you were living in a mud hut in the Dark Ages, you were at very similar odds. If no one came along, you were fine (the analogy being that your software is up to date and recognizes the threat). Most often, however, your defenses were paltry in comparison to those who might threaten you.
So, what does this all mean?
A while ago a friend of mine purchased a Tesla with self-driving capabilities; he let me sit in the driver’s seat while the car took us to a nearby park. The trip started in an urban area with a few traffic lights and light traffic. The car performed exceptionally well: it braked and turned smoothly and even used the turn signals. There was also a display that showed where the car was en route. I must admit that I felt a weird combination of exhilaration and alarm. But the trip went smoothly, and then I drove back manually to the gym, our starting point, to feel how the car handled. Not long after that, there was a news article about a pedestrian struck in California by a self-driving vehicle. Would I trust a self-driving vehicle now? The answer is probably yes, but I would be selective about its use and watch it very closely: I would use it only on open country roads and drive manually in congested urban areas. The reality is that eventually the car will do better than I can; as the technology evolves, it will be far more accurate than a human driver and immune to fatigue. But I think that day has not yet arrived.
Another example is automated public transit. Many trains and subways have very little, if any, human involvement. Then there is air travel, where control of the aircraft is usually automated. We need to trust these systems, so what degree of assurance do we have for our safety and the safety of others? The same questions must be asked of smart grid technologies, intelligent water distribution, and other smart community technologies built on the Internet of Things (IoT) and Operational Technology (OT). Given that these are critical systems for human welfare, it only makes sense that trust in them is of imperative importance.
In my last blog, I wrote in detail about Blockchain technology and its relevance in providing a higher degree of assurance for data, which, as I have pointed out, is a major component in the equation of trust. It also provides the foundation for smart contracts and digital government, with immutable records and history. Digital currency such as Bitcoin has provided both the proof of concept and a valuable method of representing and using wealth in today’s digital world.
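The immutability that makes Blockchain useful for assurance comes from chaining cryptographic hashes: each record incorporates the hash of the one before it. Here is a minimal, illustrative hash chain (not real Blockchain code, with no consensus mechanism or distribution) showing how any attempt to rewrite history is detected:

```python
import hashlib
import json

def block_hash(payload: dict) -> str:
    # Hash the block's payload together with the previous block's hash,
    # so altering any block breaks every subsequent link in the chain.
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data: str) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"data": data, "prev": prev}
    block["hash"] = block_hash({"data": data, "prev": prev})
    chain.append(block)

def verify(chain: list) -> bool:
    prev = "0" * 64
    for block in chain:
        expected = block_hash({"data": block["data"], "prev": block["prev"]})
        if block["prev"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True

chain = []
append_block(chain, "Alice pays Bob 5")
append_block(chain, "Bob pays Carol 2")
print(verify(chain))                      # → True: intact history
chain[0]["data"] = "Alice pays Bob 500"   # attempt to rewrite the record
print(verify(chain))                      # → False: tampering detected
```

A real Blockchain adds distributed consensus and proof of work on top of this linking, but the chained hashes alone are what give the record its immutable, verifiable history.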
Finally, we need to realize that civil society is a thin veneer on deeper human traits that are not desirable. We need to be able to trust our media and our government, as well as our neighbors. After all, trust is an essential human trait, and we must not lose sight of that. We also need to understand our obligation regarding dependence on technology and systems that support us. Zero Trust networking has shown quite a bit of merit in this area. There is an irony in the very name of Zero Trust. In order for us to trust the systems, the systems ideally should operate in a zero trust environment. Food for thought.
I also think that it is important that we begin to study technology’s impact on human psychology. We must come to grips with the fact that technology has changed us and will continue to do so. It is better to be aware of it than to ignore it. It may be one of the most important things that we can do as a species, not only for ourselves but for our societies and our descendants.