Men have become the tools of their tools.
— Henry David Thoreau
The line that divides human and computing capability is shifting, and it is poised to shift more than it ever has before.
It may not be helpful to think of computer-based systems as tools, as human augmentation, anymore. We may need to rethink how we think about the computing landscape, and consider rejigging our tools of thinking, notably architecture. This Advisor suggests stretching in that direction, so that we are positioned more effectively to meet a qualitatively different future as it charges rapidly toward us.
Toys Will Be Toys …
Even though there have been many advances in computing and its application to human activities over the last couple of decades, they have largely been about scaling: More people now touch some kind of computer directly than ever before, to do more things. An important factor in that scaling has been making the “touch point” more human-like. Computing capability was applied to make its own interface with humans “natural.” Touches, gestures, looks, and speech have crept into everyday interactions with things that used to be much more aloof and intimidating to humans in the previous cycle of computing’s evolution. The toys have stayed toys, though, even if the leashes we tug them with now fit a little more snugly in our hands.
Interestingly, as we note that we are making improvements in human-computer interfaces, we are subtly nudged into realizing that these interfaces are there only because the two worlds — human and computer — exist separately. Computers do what computers do, and humans do what humans do. Yes, computing has bled into the interface between the two, making the line in between a bit easier to traverse. And yes, Dear Alexa, we will continue to see improvements in this area as we move forward. However, none of these “advances” has accomplished any fundamental change in the division of roles and responsibilities across man and machine; they have not shifted the line between them. Arguably, what we have done over the past couple of decades is merely spread computing’s ability to automate specifiable rules across larger swaths of people.
Our toys largely remain toys, and do what we tell them to do. Nothing more, and nothing less.
… Or Not!
There is mischief afoot, though. In these early days of artificial intelligence and machines that can “learn” using various techniques, we are getting a glimpse of a more qualitative shift than previous computing waves have delivered. Our tools may be encroaching on an area that has thus far mostly eluded them: something along the lines of human cognitive ability. And, as they do so, an uneasy cloud hangs over us. We fret that our toys might come alive at the stroke of midnight and dance and prance around. There is just cause for concern, because we are unsure what our toys do when we are not watching, or when they work behind the scenes. We have seen signs of misbehavior, as reports of autonomous-machine mishaps reach our ears. Two Boeing 737 Max planes crashed, killing hundreds of people, as man and machine engaged in a fierce battle for control. A chatbot drew on our worst human instincts and put them on steroids.
Horror of horrors! What if the things we are starting to create not only fight with us, but also grow up to be like us?
Can We Put the Toys Back into Pandora’s Box?
As Pandora did, we have unleashed many spirits and sprites, some of which we view with considerable unease. However, we cannot put back the impish toys that are gleefully making a break from their former prison. We may need to take a more pragmatic view, perhaps even one tinged with optimism, and look for a way to engage our creations productively as they begin to inhabit spaces that were exclusively ours until now. After all, according to the myth, Pandora did close the box later and managed to keep Hope inside.
Fortunately, we know the old box well. We have all kinds of diagrams, matrices, descriptions, and blueprints of the computer systems that have been created over the last few decades. For, in many enterprises, “architecture” has generally been about the computing and technical side of things: the toys. One thing, though, is noticeably absent from these artifacts of our past: people. There are boxes, circles, lines, and other geometric oddities, but generally no smiley faces. The occasional smiley face does show up on the edges of a rare architecture diagram, perhaps to humor “users,” the ones who “use” the toys. However, most architecture work has steered clear of the unruly, messy work that human beings do on top of what their more rigid computing tools do.
We Can Expand the Box, Though
We cannot consign the human part of the architecture to the outside of the box anymore. First, computing is creeping into domains that were, until now, monopolized by humans simply because our computers lacked cognitive and learning capabilities; we therefore need to be able to see those parts of the architecture more clearly. Second, as autonomous-computing mishaps such as the examples above start to occur, we need ways to build in controls that require the meta-cognitive and value-judgment abilities only humans possess. Finally, unless we see rule-based, fully specifiable processes and cognitively demanding processes side by side, it is not easy to assign roles and responsibilities appropriately across software and wetware.
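To make that last point a bit more tangible, here is a minimal, purely illustrative sketch in Python (the element names and fields are assumptions for illustration, not an established architecture notation) of how an architecture model might tag each process step with its executor, so that rule-based and judgment-heavy work appear side by side and responsibilities can be assigned deliberately.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class Executor(Enum):
    """Who carries out a step in the architecture."""
    SOFTWARE = "software"   # fully specifiable, rule-based automation
    HUMAN = "human"         # judgment, meta-cognition, value trade-offs
    HYBRID = "hybrid"       # machine proposes, human disposes


@dataclass
class Step:
    """One process step in an architecture blueprint."""
    name: str
    executor: Executor
    oversight: str = ""     # who reviews or can override this step


@dataclass
class Process:
    """A process whose steps span software and wetware."""
    name: str
    steps: List[Step] = field(default_factory=list)

    def human_touchpoints(self) -> List[Step]:
        """Steps that need a person, either as executor or as overseer."""
        return [s for s in self.steps
                if s.executor in (Executor.HUMAN, Executor.HYBRID) or s.oversight]


# Hypothetical example: a loan-approval process with its smiley faces included
loan = Process("loan approval", [
    Step("collect application data", Executor.SOFTWARE),
    Step("score credit risk", Executor.SOFTWARE, oversight="risk officer"),
    Step("decide borderline cases", Executor.HUMAN),
])

for step in loan.human_touchpoints():
    print(f"{step.name}: {step.executor.value}, oversight: {step.oversight or 'n/a'}")
```

The point of such a sketch is not the particular notation; it is that once human and machine responsibilities are first-class elements of the model, the question of who should do what, and who watches whom, can be asked explicitly rather than left to the margins of a diagram.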
So how bereft of people are your architecture blueprints and diagrams? Do you think there is a need to start putting smiley faces into artifacts, and to create new roles and responsibilities for human beings that go beyond a “user” relationship with computers? Or, do you think we can just continue to do architecture the way we have been doing it? Post your comments at the link below, or send email to bprasad@cutter.com.