Security

Chatelaine Security for Smart Buildings and Cities.

 There is a growing recognition that traditional models of cybersecurity are inadequate to secure complex systems, and to detect security flaws in systems of systems. (Many definitions essentially declare complex systems and systems of systems to be equivalent terms.) Full-stack systems are too fragile and ponderous to readily address smart cities, or even smart buildings. The spatial web anticipates unbounded numbers of systems interacting, each with its own technology, purpose, and perhaps even frame of spatial reference.

 I read this week of applying behavioral economics to cybersecurity and calling the result (and the new book) “Security Chaos Engineering”. It seems aligned with models for security in actor-based systems, which seem to me the only way to develop the IoT at true scale.

 Actors do not know what other actors do in a system. Actors receive messages from other actors in potentially massive concurrent computation systems. In response to a message it receives, an actor can: make local decisions, create more actors, send more messages, and determine how to respond to the next message received. Actors can only affect each other indirectly through messaging, and cannot know the state of other actors.
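 That receive-a-message, update-local-state, send-more-messages loop can be sketched in a few lines of Python. This is an illustrative toy, not any particular actor framework; the class and method names are my own:

```python
import queue
import threading

class Actor:
    """A minimal actor: private state, a mailbox, and a handler.

    Other actors can only send it messages; they can never read its
    state directly, and they cannot know what it does with a message.
    (Illustrative sketch, not a real actor framework.)
    """
    def __init__(self, handler):
        self._state = {}                 # local state, invisible to other actors
        self._mailbox = queue.Queue()    # messages are the only way in
        self._handler = handler
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, message):
        """The only way one actor can affect another: enqueue a message."""
        self._mailbox.put(message)

    def _run(self):
        while True:
            message = self._mailbox.get()
            if message is None:          # sentinel: stop this actor
                break
            # The handler makes local decisions: it may update local
            # state, send messages to other actors, or decide how to
            # treat the next message received.
            self._handler(self._state, message)

    def stop(self):
        self.send(None)
        self._thread.join()
```

A counter actor, for example, would pass a handler that accumulates each message into its private state; a caller can only `send` to it and observe its outward behavior.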

 You may recognize this as the fundamental model of cloud-native computing, whether the cloud is in a faraway data center, or in many such data centers, or in an on-premises cloud, or in some hybrid of the above. Some name this approach microservices. What makes it work is well-defined, invariant interfaces, that is, fully defined messages between systems. Any system, then, no matter how large or complex, can participate AS IF an actor so long as it limits a specific interaction to these commonly defined messages.
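 As a toy illustration of such an invariant interface, consider a shared message schema that every participating system must honor. The `MeterReading` shape below is hypothetical, not drawn from any standard; the point is only that the contract, not the sender's internal design, is what the fabric depends on:

```python
from dataclasses import dataclass

# A hypothetical shared message contract. Any system, of any internal
# design, can participate "as if" an actor so long as the messages it
# exchanges conform to this invariant shape.
@dataclass(frozen=True)      # frozen: once built, the message is immutable
class MeterReading:
    meter_id: str
    kilowatt_hours: float
    timestamp: str           # e.g. ISO 8601: "2023-05-01T12:00:00Z"

def parse_reading(payload: dict) -> MeterReading:
    """Admit a raw payload to the fabric only if it satisfies the contract.

    A missing field raises KeyError; a malformed number raises ValueError.
    """
    return MeterReading(
        meter_id=str(payload["meter_id"]),
        kilowatt_hours=float(payload["kilowatt_hours"]),
        timestamp=str(payload["timestamp"]),
    )
```

The validation boundary is the whole idea: the receiver never needs to know whether the sender is a single sensor or an entire building-management system.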

 Transactive energy is essentially a means to replace an integrated control system with a system of systems, with each economic actor clearly a separate system.

 This last week found me discussing the implications of these approaches for cybersecurity standards. Traditional cybersecurity doctrine, including for cyberphysical systems, assumes too much homogeneity of systems, in internal design and in purpose, and thus makes assumptions of cybersecurity-agent omniscience that are unachievable. In complex systems, it is far easier to influence the systems your system relies on than to ever touch your system itself. Even within a single system, a man-in-the-middle exploit on a sensor can more easily be replaced with some chewing gum on the sensor, or with a light tap of a mallet on the sensor, or even with a remote infrasonic attack on a sensor array.

 What one can do is monitor the patterns of interactions (messages) between distributed actors, likely with ML (machine learning), and notice that one of the systems is acting less and less as expected. This can work even in a zero-trust environment, in which all the actual messages are encrypted. One still cannot know automatically whether a change of behavior is due to enemy action, or merely to a change in programmed motivation.
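 A minimal sketch of this kind of pattern monitoring, with a running statistical baseline standing in for a real ML model. It watches only message counts per actor, never payloads, so it works over encrypted traffic; all names here are illustrative:

```python
import math
from collections import defaultdict

class MessagePatternMonitor:
    """Watch only message *patterns* (who talks, how often), never payloads,
    and flag an actor whose behavior drifts from its own history.
    (Illustrative sketch; a real deployment would use an ML model.)
    """
    def __init__(self, threshold=3.0, warmup=5):
        self.threshold = threshold   # flag beyond this many std deviations
        self.warmup = warmup         # observations needed before flagging
        self.stats = defaultdict(lambda: {"n": 0, "mean": 0.0, "m2": 0.0})

    def observe(self, actor_id, count):
        """Record one interval's message count; return True if anomalous."""
        s = self.stats[actor_id]
        anomalous = False
        if s["n"] >= self.warmup:
            std = math.sqrt(s["m2"] / s["n"])
            if std > 0 and abs(count - s["mean"]) > self.threshold * std:
                anomalous = True
        # Welford's online update of the running mean and variance
        s["n"] += 1
        delta = count - s["mean"]
        s["mean"] += delta / s["n"]
        s["m2"] += delta * (count - s["mean"])
        return anomalous
```

Note that a flag here only says "this actor no longer behaves like itself"; as above, it cannot distinguish enemy action from a legitimate change in motivation. That judgment stays with a human.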

 Maggie had the misfortune of being on a four-and-a-half-hour drive with me last week, and so endured a long description of this approach and of the week’s conversations. She promptly declared this to be a chatelaine model. Hmmm.

 A chatelaine is the mistress of a large estate (chateau) who may not know everything that is going on, but has all the keys, and notices patterns. There is no butter in the bin: is the milk-maid on vacation? Did she get married? Was she fired? Did the cow run dry? Do we need a new cow? The chatelaine can notify the owner, and she can invent a plan of action and investigation. I liked it. It is a much more approachable, easier-to-understand term than “security chaos engineering”.

 I want a chatelaine in the message fabric.