Can We Trust the Future? Part 2

Machine-level trust
In Part 1 of this two-part series (Can We Trust the Future?), we discussed the definition and brief history of trust and how it enabled societal evolution. We then postulated the “Rise of the Machines,” whereby our future societal evolution will rely heavily upon devices that will inevitably need to trust each other, just as we humans have. This all leads to the question: what is machine-level trust?

When you think about it, machine-level trust is very similar to human trust. It distills down to clear identity. For humans, we have developed our clear identity through our signatures, fingerprints, and the visual memory imprints we leave with each other (i.e., the gray-matter video recorder we all hold within our craniums). We authenticate each other predominantly through some variation and/or combination of these. Now ask yourself what mechanisms machines possess to do the same thing. You will find, amazingly, that there are very few foundational mechanisms available to machines today to authenticate themselves in a similar fashion.

Today’s untrusted devices: a hacker’s clone army
One of the key reasons so many cyber miscreants wander cyberspace with abandon is that they can easily propagate their ills from machine to machine without any fear of an identity-based authentication trust mechanism. This makes each machine effectively a clone that can easily replicate its ills to all other clones. The very term “computer virus” fits so well because it is so similar to a human virus that attaches itself to a common (i.e., cloned) cellular structure within a human. The same can be said of botnets, malware, and so many of today’s cyber-ills. Imagine now if such a virus could no longer find a common structure upon which to attach and replicate itself.

So, your smartphone, tablet, and many other devices are considered by most organizations to be “untrusted devices” unless, of course, they have “authenticated” them. But really, what does that mean? In the absence of a foundational mechanism to unquestionably identify the device, how can an organization truly authenticate it? It would be similar to authenticating a driver at the DMV, only to find out later that the applicant was one of, say, 10 million cloned humans who were identical in every way (including DNA and fingerprints). Which one of those 10 million clones was the applicant you authenticated? Makes for a tough authentication problem, doesn’t it? Well, that’s the world we live in today. I will go so far as to say that all devices, unless under 24/7 surveillance, have the potential to violate their authentication and are, hence, all untrusted.

Current state of machine-level trust
So, with that said, there are plenty of solutions offered in the marketplace to authenticate users via these unauthenticated devices. Examples include fingerprint readers and even the basic username and password: all mechanisms that authenticate the human using the device, not the device itself.

This leaves us with a false sense of security. All we are really doing is attempting to substitute human-level authentication through a machine proxy. This works in many situations, but, as discussed above, it does not really solve the machine-level authentication problem. As more and more machines become critical to our daily lives, and as they rely more and more upon communicating with each other without a human in the loop, a potential problem is brewing with regard to how they will trust each other.

The coming electronic revolution in automobiles, spanning everything from today’s fully electric vehicles to completely networked autonomous vehicle trains, is a good place to think about machine-level trust. Picture riding in your shiny new electric car as it automatically drives itself down the highway in sync with all the other cars in its lane. What happens if the machine-level trust is violated by one of these vehicles? Hope it’s not your car.

Future technologies showing promise
What is needed to extend our continued evolution of trust to machines is a mechanism for machines to have a clear, unquestionable identity, just as humans do. A technology showing great promise in this regard is based on physically unclonable functions (PUFs).

PUFs use the random parametric variations in the electrical properties of integrated circuits to differentiate one chip from another, and they are designed to be impervious to duplication or prediction, even by the manufacturer. These random parametric variations are normal, are maintained within process-control limits, and are effectively impossible to manipulate or eliminate, yet they can be measured. A PUF is effectively an electronic fingerprint of a silicon die that cannot be cloned, spoofed, or easily exploited. PUFs show the promise of providing a machine-level trust foundation upon which true device-level authentication can take place.
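To make the idea concrete, here is a minimal sketch, in Python, of how PUF-based device authentication could work. The `SimulatedPUF` class, its bit counts, noise rate, and matching threshold are all hypothetical stand-ins for real silicon measurements; real designs also add error correction (“fuzzy extraction”) to handle read noise:

```python
import random

class SimulatedPUF:
    """Toy stand-in for a PUF: a device's manufacturing variations fix a
    random bit pattern, and each read adds a little measurement noise."""

    def __init__(self, seed, n_bits=256, noise=0.02):
        rng = random.Random(seed)
        self._true_bits = [rng.randint(0, 1) for _ in range(n_bits)]
        self._noise = noise

    def read(self):
        # A real PUF read is never bit-perfect; flip a small fraction of bits.
        return [b ^ (random.random() < self._noise) for b in self._true_bits]

def hamming(a, b):
    """Number of positions where two bit strings disagree."""
    return sum(x != y for x, y in zip(a, b))

def authenticate(enrolled, fresh_reading, threshold=30):
    """Accept the device if the fresh reading is close to the enrolled one.

    Noisy re-reads of the same die differ in only a few bits, while two
    different dies disagree on roughly half of them, so a threshold well
    below n_bits / 2 cleanly separates the two cases."""
    return hamming(enrolled, fresh_reading) <= threshold

# Enrollment: record each device's fingerprint once, in a trusted setting.
device_a = SimulatedPUF(seed=1)
device_b = SimulatedPUF(seed=2)
enrolled_a = device_a.read()

# Later: device A re-reads its PUF and is recognized; device B is rejected.
print(authenticate(enrolled_a, device_a.read()))  # True
print(authenticate(enrolled_a, device_b.read()))  # False
```

The point of the sketch is only that the identity comes from measuring the die itself, not from a stored secret that a clone could copy.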

13 comments on “Can We Trust the Future? Part 2”

  1. Mr. Roques
    April 29, 2013

    How can it be impossible to clone? If a fingerprint can be replicated, what stops a hacker? I'm sure it'll make it a lot more difficult but impossible? I doubt it.

  2. Taimoor Zubar
    April 29, 2013

    @Mr. Roques: Even your DNA can be cloned – let alone fingerprints. The more sophisticated the technology is getting, the more loopholes there seem to be developing. I don't think there can ever be a time when you can be 100% reliant on technology for security.

  3. itguyphil
    April 29, 2013

    Security in itself will never be truly “safe.”

    Including technology in the equation adds no safety net.

  4. Houngbo_Hospice
    April 29, 2013

    @taimoorz: No technology is 100% secure, but not everything can easily be cracked. Even if DNA can be cloned, biometric information is nonetheless unique to each individual.

  5. Adeniji Kayode
    April 29, 2013


    I agree with you on that. When it comes to security, it seems we make things only to break them later.

    There can never be anything like “Secured for Life.”

  6. prabhakar_deosthali
    April 30, 2013

    Even if we are able to create a unique identity for a chip, the software that reads this identity and authenticates it can easily be compromised by a virus, unless there is another hardware device (again, a unique one) that does this authentication.

    The issue of security is much more complex than we think.

  7. HM
    April 30, 2013

    This reminds me of the movie “The Burning Train,” in which the train went into auto mode, it was almost impossible to stop, and it did enormous damage. Of course, it was a movie. But humans should be ready for the damage devices can create if they go out of control.

  8. Houngbo_Hospice
    April 30, 2013


    “There can never be anything like ‘Secured for Life.’”

    Agreed! But we should not be paranoid either, always heavily influenced by anxiety or fear that we will never live in a safe world. Life is about balancing comfort and risk.

  9. Eric Sivertson
    April 30, 2013


    Good questions and healthy skepticism. But this type of technology is truly “unclonable” for the foreseeable future. We are relying on microscopic silicon die manufacturing variations. These variations are truly at the atomic level, and it would be cost-prohibitive, using today’s silicon chip manufacturing processes, to control things on an atom-by-atom basis. A state-of-the-art silicon manufacturing facility costs somewhere in the neighborhood of $1 billion to $10 billion. It is rumored that TSMC (one of the world’s largest semiconductor foundries) spent close to $10 billion on its current 300mm wafer fab.

    Take a look at the following two pictures. Picture 1 is of an Intel Pentium chip at the 1x level. Picture 2 shows the metal interconnects on a chip, greatly enlarged under an electron microscope.


    Picture 1 – Intel Pentium chip
    Picture 2 – Electron micrograph of chip interconnect

    As you can see, the first picture is very uniform to the eye, but in the second picture you can clearly see the less-than-uniform nature of the circuit connections. These variations, which result from the silicon die manufacturing process, are what a PUF leverages. The non-uniformity is natural for the chip design and does not affect overall chip performance. Striving for perfect uniformity at the microscopic level would require almost atom-by-atom placement, which would make mass production cost-prohibitive for semiconductor manufacturers. A PUF effectively measures these variations and, depending on the PUF type, yields a measurement that is unique from chip to chip. The commercially available PUF technologies have extensive data showing this uniqueness and randomness over large populations of chips.

    So, in the long run there might be new technological advances that change how chips are made to a more uniform process, but that does not appear to be coming anytime soon. Until then, a PUF gives you a very low-cost, effective way to build a trust foundation on a silicon die-by-die basis that is truly unclonable.

    Thanks for your comments and feedback.
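    The uniqueness and randomness data mentioned above can be mimicked with a toy simulation. Everything here (bit counts, noise rate, population size) is a hypothetical stand-in for real measured silicon, but it shows the two distributions that real PUF characterization reports: intra-chip distance (same die, repeated reads) near zero, and inter-chip distance (different dies) near 50%:

```python
import random

def make_chip_fingerprint(seed, n_bits=128):
    # Each chip's manufacturing variations fix a random bit pattern.
    rng = random.Random(seed)
    return [rng.randint(0, 1) for _ in range(n_bits)]

def noisy_read(fingerprint, noise=0.03):
    # Re-reading the same chip flips a small fraction of bits.
    return [b ^ (random.random() < noise) for b in fingerprint]

def hamming_fraction(a, b):
    # Fraction of bit positions where two readings disagree.
    return sum(x != y for x, y in zip(a, b)) / len(a)

chips = [make_chip_fingerprint(seed) for seed in range(100)]

# Intra-chip distance: two reads of the same die should nearly agree.
intra = [hamming_fraction(noisy_read(c), noisy_read(c)) for c in chips]

# Inter-chip distance: different dies should disagree on ~half the bits.
inter = [hamming_fraction(chips[i], chips[j])
         for i in range(len(chips)) for j in range(i + 1, len(chips))]

print(f"mean intra-chip distance: {sum(intra) / len(intra):.3f}")
print(f"mean inter-chip distance: {sum(inter) / len(inter):.3f}")
```

    The wider the gap between those two distributions, the easier it is to pick an authentication threshold that never mistakes one die for another.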



  10. Eric Sivertson
    April 30, 2013


    Very good point about “biometric information is nonetheless unique to each individual.” That is the real idea of a PUF: it is effectively a “biometric” for a non-biological entity (a silicon chip).

  11. Eric Sivertson
    April 30, 2013


    That is exactly my point, and it is why I think we need to be more proactive about these things. Unfortunately, humans have a funny tendency to ignore prevention and then pile on cure. As the old saying goes, an ounce of prevention is worth a pound of cure. PUF technology has the promise to be that ounce of prevention. The key question now is whether we will adopt something like this or wait until we have more problems.

    Thanks for your comments.

  12. Eric Sivertson
    April 30, 2013


    Security, when done right, is both complex and simple. Many of today’s security breaches actually take advantage of the complexity of the cures. When you examine the nature of the attacks, you will see that many target foundational designs and use them for replication and exploitation.

    If you start to bind software to hardware, whereby the software can only operate on the authorized device, then the nature of the attacks becomes much more challenging and less cost-effective. Today’s attacks have a very easy mechanism for replication; remove that mechanism and the cost of an attack goes up greatly for the perpetrator. This will stop many of the current attacks. In fact, the “computer virus” must have a common foundation for replication. Using a PUF as a trust foundation removes this common foundation, thereby forcing significant changes to the computer-virus model.

    But I agree with you that it is a complex problem, and one should never underestimate the lengths someone will go to in mounting an attack. Still, increasing the cost for the attacker has historically reduced the level and intensity of attacks.
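    Here is a minimal sketch of that software-to-hardware binding idea, assuming a stable PUF response has already been extracted. The function names and the toy cipher are hypothetical illustrations only; a real design would apply error correction to the PUF reading and use an authenticated cipher such as AES-GCM rather than this XOR keystream:

```python
import hashlib

def derive_device_key(puf_response: bytes) -> bytes:
    # In practice, error correction ("fuzzy extraction") runs first so the
    # noisy PUF always yields the same bits; here we assume a stable
    # response and hash it into a 256-bit device-unique key.
    return hashlib.sha256(puf_response).digest()

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Toy keystream cipher for illustration only; NOT real cryptography.
    # XOR is symmetric, so the same call encrypts and decrypts.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))

# "Install time": encrypt the program under this device's PUF-derived key.
this_device_puf = bytes([31, 138, 7, 200])  # stands in for a real PUF reading
program = b"print('hello from an authorized device')"
bound_blob = xor_stream(program, derive_device_key(this_device_puf))

# Only a device that reproduces the same PUF response recovers the program.
assert xor_stream(bound_blob, derive_device_key(this_device_puf)) == program

# A cloned blob on a different device decrypts to garbage.
other_device_puf = bytes([66, 66, 66, 66])  # a different chip's reading
assert xor_stream(bound_blob, derive_device_key(other_device_puf)) != program
```

    The blob is useless off-device, which is exactly what removes the virus’s common foundation for replication.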

  13. Adeniji Kayode
    April 30, 2013


    I agree with you on that. Taking risks brought us this far. All our inventions and discoveries are the result of somebody, somewhere, taking a risk.

    So taking risks is part of our daily lives.
