By: Wedge Greene
On a dark and stormy Friday morning, the young woman entered the clinic procedure room and took the seat the lab tech indicated. She wasn’t overtly nervous, but she did carry a background level of concern: not so much about her health as about the inconvenience if an issue were found in her patch micro insulin pump. Not that she expected a problem; still, while the filler cartridges were easy to self-service (she switched hers every Wednesday morning), the twice-yearly maintenance was not routine. As the technician entered the security code and began to read the diagnostic information, she slumped in the chair, dropping her book. The technician looked over, quickly rose to check her pulse, and then hit the red emergency button by the door. She was a lucky woman; the pump failure that delivered her remaining five days of insulin all at once had occurred in the clinic.
She received the care necessary to survive.
Friday afternoon, the VP of public relations for the pump’s seller opened an email from a known colleague; subsequent inspection showed the email address was counterfeit. Within the email was the company’s own accident incident report for the young lady. Accompanying it were the diagnostic data from that pump and a log of the commands that had triggered the abnormal release of insulin, including the internet address of the specific bot that had inserted the attack into the young lady’s phone and from there piggybacked onto the clinic’s system. A polite request for a $100 million ‘security consulting fee’ to be transferred to an offshore account was balanced by a simple statement: 17% of their customer base was compromised and not delivering insulin; instead, signals from the co-deployed blood sugar monitors were being blocked. Further, if payment was not received by the end of the banking day, the contents of the email would be released to the public, timed to coincide with bot calls to the phones of the compromised customers.
Analysis by the manufacturer’s team found that the pump hack had occurred when the technician pushed the recessed button that enables short-term wireless communication between the diagnostic center and the device, then entered the manufacturer’s decryption key. The manual, time-limited radio-enable button had been added to the pump’s design during the pre-release security audit of the system. It did limit the zone in which an outside hack could occur. But the young lady’s infected phone was standing by, waiting for exactly this event. The hack required special circumstances. It could be prevented going forward, but there was no assurance that existing devices had not been compromised in prior maintenance cycles.[1]
Finance ran projected loss scenarios for ‘private recall and correction’ versus ‘exposure of the vulnerability’. They were currently at the zero-day stage of the vulnerability, with only a single threat vector. Release of the botnets and code at large was the worst-case scenario. Calculation of risk (vulnerability versus potential losses) was no longer abstract: disclosure of a security weakness typically removes 10% of a stock’s valuation. The recommendation was to pay the ransom, gaining time for a still costly but manageable private recall. But they also placed a longshot bet: calling our (hypothetical) security Private Investigator (PI).
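As an aside, the loss calculus Finance faced can be sketched as a simple expected-cost comparison. The Python snippet below is a minimal sketch, not the company’s actual model: only the $100 million ransom demand and the rule-of-thumb 10% valuation hit come from this scenario; the market capitalization, recall costs, and leak probability are invented placeholders.

# Illustrative expected-loss comparison for the two options Finance weighed.
# All figures except the $100M ransom and the 10% valuation hit are
# hypothetical placeholders, not data from the incident.

MARKET_CAP = 20_000_000_000          # assumed $20B market capitalization
VALUATION_HIT = 0.10                 # ~10% stock drop on public disclosure
RANSOM = 100_000_000                 # the demanded 'security consulting fee'
PRIVATE_RECALL_COST = 250_000_000    # assumed cost of a quiet recall
PUBLIC_RECALL_COST = 400_000_000     # assumed cost once regulators and press are involved
P_LEAK_AFTER_PAYMENT = 0.15          # assumed chance attackers disclose anyway

# Option 1: pay the ransom and run a private recall; the residual risk is
# that the attackers leak the story despite being paid.
pay_and_recall = (RANSOM + PRIVATE_RECALL_COST
                  + P_LEAK_AFTER_PAYMENT * VALUATION_HIT * MARKET_CAP)

# Option 2: refuse, absorb the disclosure, and run a public recall;
# the valuation hit is then certain rather than probabilistic.
refuse_and_disclose = PUBLIC_RECALL_COST + VALUATION_HIT * MARKET_CAP

print(f"Pay ransom + private recall: ${pay_and_recall / 1e9:.2f}B expected")
print(f"Refuse + public disclosure:  ${refuse_and_disclose / 1e9:.2f}B expected")

Under these invented numbers the private path comes out cheaper, which is why a ransom payment can look rational on a spreadsheet even before reputational and liability effects are counted.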