Understanding and Assessing Risk

America is a country built on risk. When settlers from England came to Jamestown in 1607, no one knew how it would turn out; in fact, only 61 of 500 colonists survived the “great starvation” of 1609-1610. The risks were well known, as several earlier British colonies had failed and been abandoned, yet people continued to come to the New World in hopes of a better life.

One of the primary goals of an engineer is to find ways to minimize risk; at the same time, you are expected to innovate and solve challenging problems. At times these can be conflicting goals. Can you be innovative and solve problems without taking major risks? Consider IBM, a company inherently built on risk. T.J. Watson, Sr., bet the entire company on building tabulating equipment when there was no market for it. But he saw a future need, and that need arrived when the Social Security Act of 1935 was passed and IBM was the only company that had the necessary equipment. T.J. Watson, Jr., also saw the future, bet the company on computing, and spent five billion dollars building the revolutionary System/360 mainframe. If IBM had stayed cautious and never branched into new, emerging areas, it would not be the admired company it is today.

The way individuals approach risk can be divided into three categories: risk averse, risk inclined, and risk neutral. Risk-averse individuals tend to shy away from risk, risk-inclined individuals are predisposed to taking risks, and risk-neutral individuals lie somewhere between the two.

We should ask ourselves larger questions about risk: how many risks should we take, and why? A paper presented at the International Conference on System Science, titled “Understand the Effect of Risk Aversion on Risk”, discusses the perils of being risk illiterate. The paper makes a few key points. First, if people are too risk averse, then small incidents get overblown, leading to hysteria and inflated importance. This happens because some individuals cannot distinguish between small and large incidents. Consider the potential failure modes of a server: if one core of an 8-core processor fails on a single node, it does not take down the server; it is unlikely to cause an interruption and can be repaired. If the server were to lose power and take down the entire mainframe, that would be a major failure event. We must not over-plan and over-train for specific events; instead, we should focus on determining acceptable levels of risk for system failures at a variety of levels. Should we spend more time on major events that could lead to system failure, or should we spend time worrying about a cosmetic defect?
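
To make that weighing concrete, here is a minimal sketch in Python, with made-up probabilities and impact scores (not numbers from the paper), that ranks failure modes by expected impact instead of treating every incident the same:

    # Hypothetical failure modes with rough probabilities and impact scores.
    # The numbers are illustrative only; what matters is the relative ranking.
    failure_modes = {
        "one core fails on a single node": {"probability": 0.05,  "impact": 1.0},
        "cosmetic defect on the chassis":  {"probability": 0.20,  "impact": 0.1},
        "mainframe loses power entirely":  {"probability": 0.001, "impact": 100.0},
    }

    for name, mode in failure_modes.items():
        expected_impact = mode["probability"] * mode["impact"]
        print(f"{name:35s} expected impact = {expected_impact:.3f}")

Even with crude numbers, the rare total loss of power outweighs the everyday cosmetic defect, and that is exactly the kind of distinction a risk-literate plan is built around.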

When I think about my own career, I am risk inclined. As a young engineer, I see no reason not to introduce innovative processes if they lead to improved quality and more efficient manufacturing. In my opinion, settling for mediocrity is worse than failing, and this is the sentiment that defines first-rate engineers, scientists, businesspeople, and investors.

Computer Security: The Ultimate Inside Trader

I went to a talk on computer security. A few interesting case studies were brought up by the lead engineer of the server access and virtualization group at Cisco. Two case studies stuck out to me.

Someone suspected there was an issue between two electronic trading centers, one in Asia and one in the USA; he wouldn’t say exactly where, or who the client was. They did some research: they called in their physicists to calculate, given the curvature of the Earth between the two continents, how fast data could possibly travel between the sites, and then had their EEs look at the transmission-line characteristics. Initially, the IT engineer for the trading company told them that the transfer rate came out to 1.5 times the speed of light.

Cisco Engineer: What do you mean the transfer rate is 1.5 times the speed of light? That’s physically impossible; it defies the laws of physics.
IT engineer: Well, I guess we must have a very robust router.
CE: You are an idiot.
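
The sanity check is simple enough to do yourself. Here is a rough back-of-the-envelope sketch in Python; the 14,000 km distance is my own guess at an Asia-to-US route, not a figure from the talk:

    # Speed-of-light floor on one-way latency between two trading sites.
    C_VACUUM_KM_S = 299_792.458            # speed of light in vacuum, km/s
    C_FIBER_KM_S = C_VACUUM_KM_S * 2 / 3   # light in optical fiber travels at roughly 2/3 c

    distance_km = 14_000                   # assumed great-circle distance between the sites

    floor_vacuum_ms = distance_km / C_VACUUM_KM_S * 1000
    floor_fiber_ms = distance_km / C_FIBER_KM_S * 1000

    print(f"absolute floor (vacuum): {floor_vacuum_ms:.1f} ms one way")
    print(f"realistic floor (fiber): {floor_fiber_ms:.1f} ms one way")

Any measured one-way time below the vacuum floor means the measurement or the claim is wrong; no router, however robust, changes that.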

Eventually, they found that transactions to one of the servers were being delayed by milliseconds; the packets were being slowed down. So they did some more research, and it turns out the data connection between the two continents ran over an undersea cable (there are a lot of undersea data cables from continent to continent; that is usually how we transfer data, please see the attachment). Side note, some investment advice: buy land in Africa, where the cable interconnect from Asia is routed.

They did more research and discovered a man-in-the-middle attack: someone had actually gotten access to the undersea cable where it was routed through French Polynesia in the Pacific Ocean. As a result, the attacker could intercept the data being transmitted, see what types of trades were being placed (a few milliseconds of advance knowledge was all he needed), and run an algorithm to decide whether to buy or sell shares based on the trades being executed. All I can say about the criminals: brilliant, but illegal.
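
The giveaway delays themselves are easy to flag once you have a baseline. A minimal sketch, using invented latency samples rather than anything from the case study:

    import statistics

    # Hypothetical one-way latency samples (ms) for the same route; a few arrive
    # several milliseconds late, the kind of drift that exposed the tap.
    samples_ms = [69.8, 69.9, 70.1, 69.7, 73.4, 70.0, 69.9, 73.6, 70.2, 73.5]

    baseline_ms = statistics.median(samples_ms)
    tolerance_ms = 1.0                 # assumed acceptable jitter for this link

    suspicious = [s for s in samples_ms if s - baseline_ms > tolerance_ms]
    print(f"baseline latency: {baseline_ms:.2f} ms")
    print(f"suspect samples:  {suspicious}")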

The second case study: power supplies. We often think of computer security in terms of protecting the data inputs and outputs of a computer or system, but what about the power inputs? When Vcc (the supply voltage) is not what it is supposed to be, strange things start happening at the logic level. The system might spit out an incorrect calculation, or leak too much information under over- or under-voltage conditions. Basically, he said it is very hard to design power supplies that are immune to slight signal variations.
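
A minimal sketch of that concern in Python, assuming we can sample the supply rail; the nominal voltage and tolerance band are placeholder values, not numbers from the talk:

    # Flag any supply-rail sample that strays from nominal Vcc by more than a tolerance.
    # Set TOLERANCE_V to 0 to log every deviation, however small.
    NOMINAL_VCC = 3.3     # volts, assumed nominal rail
    TOLERANCE_V = 0.05    # +/- 50 mV band, assumed

    def out_of_band(samples):
        """Return (index, voltage) pairs that fall outside the tolerance band."""
        return [(i, v) for i, v in enumerate(samples)
                if abs(v - NOMINAL_VCC) > TOLERANCE_V]

    # Example: a brief sag and a brief overshoot buried in otherwise clean readings.
    readings = [3.30, 3.31, 3.29, 3.18, 3.30, 3.42, 3.30]
    for i, v in out_of_band(readings):
        print(f"sample {i}: {v:.2f} V is outside {NOMINAL_VCC} V +/- {TOLERANCE_V} V")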

In high school physics and basic college electromagnetics, we learn that AC signals are sinusoidal with a constant amplitude, and that when you convert from AC to DC with a transformer and a diode bridge rectifier (I’ve attached an oversimplified circuit found on Google), you get a constant DC output. As a power professor I once had pointed out: what a fairy tale. For one, in real life power is delivered in three phases (not just one sinusoidal signal), and two, as shown in the plot from Wikipedia, when you superimpose all the waves and then rectify them, there is no way you get a perfectly constant DC output. You get close, but not close enough. So, in short, he argued that any input voltage variation should be logged, no matter how small.
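
To see how far from constant DC even an ideal rectifier gets, here is a small numerical sketch; the 60 Hz mains frequency and phase voltage are assumed values, and the bridge is modeled as ideal:

    import numpy as np

    F_HZ = 60.0                  # mains frequency, assumed
    V_PHASE_PEAK = 170.0         # phase peak voltage, roughly 120 V RMS, assumed
    w = 2 * np.pi * F_HZ
    t = np.linspace(0, 2 / F_HZ, 10_000)      # two full mains cycles

    # Three phase voltages, 120 degrees apart.
    phases = np.array([V_PHASE_PEAK * np.sin(w * t + k * 2 * np.pi / 3) for k in range(3)])

    # An ideal six-pulse bridge conducts from the highest phase to the lowest,
    # so the DC-side voltage is the instantaneous max minus the instantaneous min.
    v_out = phases.max(axis=0) - phases.min(axis=0)

    ripple_pp = v_out.max() - v_out.min()
    print(f"peak output:         {v_out.max():.1f} V")
    print(f"peak-to-peak ripple: {ripple_pp:.1f} V ({100 * ripple_pp / v_out.max():.1f}% of peak)")

Even before diode drops, load transients, or line noise, the ideal output already swings by roughly 13% peak to peak, so the flat DC line from the textbook is an approximation, not a fact.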

The Limits of Memory

Like many people, I attend a lot of meetings and events, and I still attend lectures for the classes I’m currently enrolled in at the University of Vermont. In these environments a large volume of information is presented, and retaining it is sometimes hard. Scientific research has a good explanation for why retaining information is so difficult. A psychological phenomenon known as change blindness suggests that humans hold onto visual information for only a fraction of a second. For sounds, humans can remember about three seconds’ worth of information using the auditory loop, a type of short-term memory. [1]

In his book “Smarter Thinking”, Art Markman introduces a concept known as the Role of Three, which stipulates that we remember about three distinct and independent pieces of information about an event. He uses a baseball game as an example: when we go to a baseball game (which I coincidentally did a few weeks ago), we remember about three things from it. In my case, I remembered the two rain delays, people getting excited when the camera panned to them, and the organ player.

Markman gave three tips for making effective presentations, so people remember what you want them to:

  1. Start every presentation with an outline, and try to limit the outline to three main items; if you can’t, group similar items together
  2. During the presentation, stay focused on the three main items, so people remember the message you are trying to convey
  3. At the end of the presentation, summarize your three key points

[1] Art Markman, Smarter Thinking, Penguin Group, 2012.