The Machines Are Coming For Us – Issue #5

Take Note

Machine learning and artificial intelligence (AI) are frequently used as interchangeable terms, particularly by marketing departments trying to decide which term is more profitable for them. But just as not all rectangles are squares, not all AI is machine learning. Artificial intelligence is a broad domain that includes topics like facial recognition, natural language processing, and machine learning itself — neural networks and statistical modeling techniques that get better with training. The defining characteristic of AI is the ability to approximate human intelligence in one or more areas; in fact, there’s a test for it: the Turing Test. Many applications of AI are elaborate decision trees that could be represented by nothing more than an epic flow chart. Machine learning is closer to what many of us think of as AI: the general-purpose ability to learn through training. Machine learning algorithms are typically unaware of the problem they are solving, but they are trained by humans (or other machines) who know the best way to teach them. This is not unlike how humans learn.

This month we take a look at what machine learning is and isn’t, explore its applications in our daily lives, and discuss our responsibilities going forward.

– Cris


The Machines Are Coming For Us

AI has advanced modestly over the past 50 years, and many of the same algorithms from the 1950s and 1960s are still in use today. Machine learning is no exception, but it comes with one key constraint: training is expensive, usually dramatically more so than running the algorithms after training. It consumes a great deal of computational power, and therefore a great deal of electricity. Many of the problems machine learning is used for (facial recognition on Facebook and climate change modeling, for example) are also expensive to run after training, simply because they are run billions of times. For most of the history of computing, demand for processing power vastly exceeded supply. That has changed in the past 10 years, as many of us now walk around with more processing power in the phone in our pocket than we have in the laptop on our desk. Machine learning requires a diet of massive amounts of silicon and electricity, and for the first time ever, that diet is readily available. Old ideas are being used to solve problems at global scale, many of which didn’t actually need solving. An abundance of computing power is tackling amazing problems, causing amusing side effects, and consuming staggering amounts of electricity along the way.

All of this has led to some interesting predictions about the future of machine learning. Sundar Pichai (of Google) insists that AI will make us a gentler, kinder planet. Elon Musk (of Tesla, SpaceX, and, well, pretty much everything) insists the machines are coming for us. Seeming to support Elon’s argument is the claim that AI is about to surpass the knowledge and processing ability of the human mind. Like everything else, though, it’s a little more complicated than that. We are distinctly human because of our improvisational skills and our ability to think in abstractions. In other words, a machine that could learn everything we know would not necessarily possess the creativity required to invent a machine smarter than itself. Further, like a car, AI needs gas. Lots and lots of gas. And until it can process, refine, and transport its own energy, we have the upper hand. Musk’s predictions are probably a little ahead of their time. Then again, he has a track record of making bold, absurd predictions about the future that end up being only slightly wrong. At the end of the day, computers are really just sophisticated rocks. A big rock is what killed the dinosaurs.

We need to keep an eye on this, to say the least. Machine learning is free from the pressures and distractions of modern society. It can be pointed at any problem and apply itself to it entirely, finding things that biased, distracted humans, however brilliant, may have missed. Already, we are learning how to better plan for population growth, rotate crops, treat disease, and predict next month’s weather. We are now turning to bigger questions of life and death, tackling problems like the causes of and cures for cancer. This promise of immortality, or at least a better time while we’re still here, will continue to push us down this road. Perhaps that temptation is what makes Elon so confident the machines will ultimately come for us.

For now, we aren’t yet ready to imagine becoming human batteries as in “The Matrix.” We’re ready to enjoy not having to tag people in our photos on Facebook. And we’re ready to cure cancer. But we’re already over people marketing to us with their next-gen machine learning tech.


Siri and Cortana: Not Machine Learning

Today, it seems like everyone is claiming to use machine learning and/or AI in their products and services, but in reality, most of these claims are misleading marketing buzzwords. Many of the ever-growing number of “smart” tools and gadgets on the market right now could be more honestly labeled “very capable at this specific task.” “Smart” implies cognition, intelligence, and the ability to learn and adapt.

Even some of our most impressively savvy tools still aren’t smart. An excellent example is our personal assistants: Siri, Cortana, and Google Now. These are sophisticated speech recognition and text-to-speech systems programmed to do a variety of things. While they may use some machine learning on the back end (e.g., Siri learns your voice), they are essentially programs that respond to voice commands and carry out a predetermined set of tasks. Because these programs don’t get to know you over time, don’t predict your tasks, and don’t improve their own performance, they are really just natural language processing (NLP) front ends that default to performing a web search when you make a request outside of their programmed functions.
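To make that distinction concrete, here is a minimal, purely hypothetical sketch of the pattern described above: a dispatcher that matches a transcribed command against a fixed table of intents and falls back to a web search for everything else. The intents, patterns, and responses are invented for illustration (this is not how any vendor actually implements its assistant), and nothing in it learns or improves with use, which is exactly the point.

```python
import re
from urllib.parse import quote_plus

# A fixed table of intents: pattern -> canned response. Nothing here learns or
# adapts; the "assistant" can only ever do what was programmed in advance.
INTENTS = [
    (re.compile(r"\bweather\b", re.I), "Here is today's forecast..."),
    (re.compile(r"\bset (an? )?alarm\b", re.I), "Alarm set."),
    (re.compile(r"\btimer\b", re.I), "Timer started."),
]

def handle(command: str) -> str:
    """Dispatch a transcribed voice command to a predetermined task."""
    for pattern, response in INTENTS:
        if pattern.search(command):
            return response
    # Anything outside the programmed set falls through to a web search,
    # the default behavior described above.
    return "Here's what I found on the web: https://www.google.com/search?q=" + quote_plus(command)

print(handle("What's the weather like?"))      # matches a canned intent
print(handle("Explain quantum entanglement"))  # no intent, so web search fallback
```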

There are countless examples of machine learning today, but we are still far from having our own R.U.D.I.


5 Examples of Machine Learning Gone Wrong


The Machines Are Learning, But What are We Teaching Them?

Sometimes machines learn bad things, and sometimes we teach machines the wrong things. We talked to David Goodell, our Linux engineer, about how we’ve been seeing this play out:

“Machine learning can be an incredibly powerful tool in creating relatively simple solutions to complex problems. You create the framework, you line up your data, and you let algorithms do the heavy lifting. Some problems, however, have an extra layer of complexity that algorithms (and sometimes their creators) aren’t very well-equipped to handle.

Take for example Tay, an artificial intelligence chatbot developed by Microsoft, designed to interact with and learn from Twitter users. Tay spent a riveting 16 hours online before being disabled by Microsoft to put a stop to the bot’s newly-learned racism.

Microsoft blamed Tay’s behavior on a “coordinated attack” that “exploited a vulnerability” in Tay’s design. This choice of language is noteworthy because it implies safeguards for this sort of event were put in place but ultimately circumvented. These safeguards (when working) are effectively manual interventions in the machine learning process. It’s not terribly uncommon for a learning algorithm to be designed to ignore “bad” data, but in the realm of programming, there’s a pretty significant gulf in implementation difficulty between “Ignore statistical outliers” and “Don’t listen to racists.” Tay had canned responses for certain hot-topic current events and was designed to avoid some topics of discussion altogether, but these are stopgap fixes for the bigger issue: teaching computers about morality. As shipped, Tay’s sense of “morality” was a set of guidelines ultimately designed and implemented by developers. A “bug” in Tay’s morality produced outright hate speech. By comparison, blue error screens don’t seem so bad.

Beyond the realm of text, machine learning also took the blame for mobile application FaceApp’s unfortunate “hot” filter. The app uses neural network-backed algorithms to alter your selfies with different filters that can do things like make you appear older or change your expression to a smile. The “hot” filter, purporting to make you more attractive, came under fire after users found it was lightening their skin and altering their facial features in a manner that made them appear more Caucasian. Creator Yaroslav Goncharov apologized publicly, citing “training set bias” in the app’s neural network. It’s difficult to point a finger at the root cause without knowing more about the data set. (Was the “hot” neural network trained exclusively on white faces?) But regardless of the level at which the initial issue occurred, the biggest failure happened at the top. I’m willing to give Goncharov the benefit of the doubt that this was entirely unintentional, but the end result is still negligence. Working with a diverse set of users to test this feature before taking it live could have prevented the racially insensitive feature (and would also have nipped the app’s more recent, equally ill-advised ethnicity filters in the bud). While the initial problem with the “hot” filter could very well have been the data, the ultimate fault was undeniably human.

Tay’s problem was negative external influencers, while FaceApp’s can be attributed to a lack of positive internal influence. But the common denominator is the same: Humans need to do better. Artificial intelligence through machine learning can tempt us with the allure of a “Set It and Forget It” means of problem-solving. It is the responsibility of the creators to manage and moderate this process — at all levels — to prevent machines learning from us at our worst.”
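To make the gulf Goodell describes concrete, here is a small, purely illustrative sketch; the blocklist tokens, numbers, and group labels are all invented. Ignoring statistical outliers really is a few lines of arithmetic. Reducing “Don’t listen to racists” to a word list, by contrast, is trivially evaded, and a crude count of who is represented in a training set is about as far as an automated “training set bias” check gets before human judgment has to take over.

```python
import statistics
from collections import Counter

def drop_outliers(values, z_max=3.0):
    """'Ignore statistical outliers' really is just a few lines of arithmetic."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if stdev == 0 or abs(v - mean) / stdev <= z_max]

# Placeholder tokens standing in for a real blocklist.
BANNED = {"badword1", "badword2"}

def looks_acceptable(message: str) -> bool:
    """'Don't listen to racists' reduced to a word list: trivially evaded by
    misspellings, spacing tricks, or hateful sentences built from innocuous
    words, which is roughly what happened to Tay."""
    return not any(token.lower().strip(".,!?") in BANNED for token in message.split())

def audit_training_set(group_labels):
    """A crude 'training set bias' check in the spirit of the FaceApp episode:
    count how many examples each group contributes before training."""
    return Counter(group_labels)

print(drop_outliers([9.8, 10.1, 10.0, 9.9, 42.0], z_max=1.5))   # 42.0 is dropped
print(looks_acceptable("bad word1 with a space slips past"))    # True: the gap
print(audit_training_set(["group_a"] * 950 + ["group_b"] * 50)) # lopsided data
```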


Machine Learning Isn’t New

Machine learning has been around for a while, one of the first AI concepts fleshed out in the 1950s. It has stayed behind the scenes in spite of a few high-profile moments when people started to think that the era of the Jetsons was upon us. Mike Quindazzi’s “The Rise of AI” infographic highlights the peaks and valleys machine learning has gone through over the past 60 years. In spite of the sputtering start, machine learning has been a staple of everyday life since the 1980s. It’s just been hiding in plain sight.

Machine learning is all around us, affecting the food we eat, how mail is delivered to our homes, and how products are priced and placed in stores.

Since the 1980s, the agriculture industry has been using precision agriculture research to create decision support systems. Comprising decision tree learning, knowledge-based systems, and more, precision agriculture helps farmers preserve resources while optimizing crops. Farmers can learn about the soil composition of their fields, decide the perfect location to plant in order to optimize water consumption, identify and treat diseased crops, and even forecast the weather and its impact on potential types of crops.
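As a rough illustration of the decision tree learning piece, here is a minimal sketch using scikit-learn. The features, values, and labels below are invented; a real precision agriculture system would train on field sensor readings and yield history rather than six made-up rows.

```python
# Toy decision-support sketch in the spirit of the systems described above.
# Requires scikit-learn; all data here is fabricated for illustration only.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical training rows: [soil_moisture_%, soil_pH, forecast_rain_mm]
X = [
    [12, 5.9,  2], [34, 6.5, 18], [28, 6.8, 30],
    [10, 7.4,  0], [40, 6.2, 25], [15, 6.0,  5],
]
y = ["irrigate", "plant", "plant", "irrigate", "plant", "irrigate"]

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The learned tree is human-readable, exactly the kind of rule a farmer can audit.
print(export_text(model, feature_names=["moisture", "pH", "rain_mm"]))
print(model.predict([[22, 6.4, 12]]))  # recommendation for a new field reading
```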

In 1965, the USPS installed its first high-speed optical character reader (OCR), using optical character recognition to read typed addresses on mailed letters. Today, letters with typed addresses go to a Multiline Optical Character Reader (MLOCR), which reads the ZIP code and address information, determines how to route the mail through the postal system, and prints the appropriate barcode onto the envelope. Letters with handwritten addresses go to a Remote Bar Coding System, where more advanced Intelligent Character Recognition machines decipher the address; correct spelling errors, omissions, or conflicts in the written address; and identify the most likely correct address. All of these systems are based on machine learning, and today they are able to train themselves as they process mail in real time.
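A simplified sketch of that sorting decision is below. The confidence threshold is invented, and it assumes the ZIP code is the last token of the recognized address, which is far cruder than what the USPS machines actually do.

```python
from dataclasses import dataclass

@dataclass
class ScanResult:
    text: str          # best-guess address text from the character recognizer
    confidence: float  # 0.0 to 1.0: how sure the recognizer is

def route_letter(scan: ScanResult, threshold: float = 0.9) -> str:
    """Sketch of the sorting decision described above: confident reads get a
    barcode and move on; low-confidence reads (often handwritten) are handed
    to a second, more aggressive recognition stage."""
    if scan.confidence >= threshold:
        zip_code = scan.text.split()[-1]  # simplifying assumption: ZIP is the last token
        return f"print barcode for {zip_code} and route toward the {zip_code[:3]} sort facility"
    return "forward the scanned image to remote barcoding / intelligent character recognition"

print(route_letter(ScanResult("123 MAIN ST SPRINGFIELD IL 62704", 0.97)))
print(route_letter(ScanResult("???", 0.42)))
```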

The concept of association rules has been used since the 1990s to analyze data collected from point-of-sale systems. Association rules spot trends in customer purchases, predict what additional products a customer may want to purchase, and build rules based on the products purchased. As an example, a person buying baby formula and wipes likely has a baby and is likely to also buy diapers. These rules influence the coupons we receive in the mail, how entire stores are laid out, and where products are placed. They even influence the purchase price of an item. Today, online shopping has only increased the use of association rules. When you add an item to your online shopping cart, you are almost certainly going to be offered a variety of other items to consider. Amazon even goes so far as to show you what other customers viewed, shopped for, and eventually purchased.
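The arithmetic behind a basic association rule is simple enough to sketch directly. With a handful of invented baskets, a rule like “formula and wipes imply diapers” comes down to two ratios, support and confidence:

```python
# Hypothetical point-of-sale baskets; a real system would mine millions of them.
baskets = [
    {"formula", "wipes", "diapers"},
    {"formula", "wipes", "diapers", "pacifier"},
    {"formula", "wipes"},
    {"bread", "milk"},
    {"formula", "diapers"},
]

def support(itemset):
    """Fraction of baskets that contain every item in the itemset."""
    return sum(itemset <= basket for basket in baskets) / len(baskets)

def confidence(antecedent, consequent):
    """How often the consequent appears in baskets that contain the antecedent."""
    return support(antecedent | consequent) / support(antecedent)

lhs, rhs = {"formula", "wipes"}, {"diapers"}
print(f"support={support(lhs | rhs):.2f}, confidence={confidence(lhs, rhs):.2f}")
# With this toy data: 2 of 5 baskets hold all three items (support 0.40), and
# 2 of the 3 formula-and-wipes baskets also hold diapers (confidence 0.67).
```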

It’s easy to overlook the many invisible processes that shape our daily lives, and difficult to fully appreciate the systems behind them. In spite of the many ways machine learning takes an active role in our lives today, it has been hiding behind the scenes for decades.


In the News

© Copyright 2019 Rhythmic Technologies, Inc. All Rights Reserved.