Automation may take our jobs—but it’ll restore our humanity (says CEO)

“Our very human future

One implication of all this is that for humans to succeed in the AI-powered future, we need to double down on our humanity. Technical skills will no doubt remain important in the future of work, but as AI allows us to automate repetitive tasks across many industries, these will in many cases take a back seat to soft skills. Communication, emotional intelligence, creativity, critical thinking, collaboration, and cognitive flexibility will become the most sought-after abilities. To prepare for that future, we need to emphasize developing higher-order thinking and emotional skills.

While our formal education system catches up to the shifting definition of human intelligence, here are three basic ideas for improving your prospects in the future of work.

Learn to tell stories. Machines aren’t very good at storytelling beyond rote reports. Telling engaging and creative stories is essential if you want to collaborate effectively with other humans. It can improve your communications in many ways—from reframing a product feature to a customer to selling a new internal KPI for how you measure success. A workshop from an organization like The Story Studio is a great place to start.
Boost your creativity. A lot of people think creativity can’t be learned; you either have it, or you don’t. But that’s not true. Creativity is a process, and you can ignite that process to improve your chances of creative results. For example, taking regular, reflective breaks, going for walks, and making time for unstructured play (yes, even for adults!) have been shown to boost creativity.
Learn how to sell. Selling is an inherently human trait, and it’s an incredibly important one. I’m not just talking about selling products, but also about selling yourself and your ideas, and convincing others to get on board with you. Mastering the basic concepts of sales involves a whole lot of very human qualities: understanding psychology, listening and asking questions, empathizing with others, and finding creative solutions to problems.”

Automation may take our jobs—but it’ll restore our humanity
via Instapaper

How Technology Might Get Out of Control (about the Nash equilibrium's demise?)

“People use laws, social norms and international agreements to reap the benefits of technology while minimizing undesirable things like environmental damage. In aiming to find such rules of behavior, we often take inspiration from what game theorists call a Nash equilibrium, named after the mathematician and economist John Nash. In game theory, a Nash equilibrium is a set of strategies that, once discovered by a set of players, provides a stable fixed point at which no one has an incentive to depart from their current strategy.

To reach such an equilibrium, the players need to understand the consequences of their own and others' potential actions. During the Cold War, for example, peace among nuclear powers depended on the understanding that any attack would ensure everyone's destruction. Similarly, from local regulations to international law, negotiations can be seen as a gradual exploration of all possible moves to find a stable framework of rules acceptable to everyone, and giving no one an incentive to cheat – because doing so would leave them worse off.

But what if technology becomes so complex and starts evolving so rapidly that humans can’t imagine the consequences of some new action? This is the question that a pair of scientists – Dimitri Kusnezov of the National Nuclear Security Administration and Wendell Jones, recently retired from Sandia National Labs – explore in a recent paper. Their unsettling conclusion: The concept of strategic equilibrium as an organizing principle may be nearly obsolete.”
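The quoted definition ("no one has an incentive to depart from their current strategy") can be checked mechanically. Here is a minimal sketch, my own illustration rather than anything from the article, that tests every strategy profile of the classic prisoner's dilemma for the Nash property; the payoff numbers are the standard textbook ones.

```python
# Checking Nash equilibria in a 2-player game (textbook prisoner's dilemma).
COOPERATE, DEFECT = 0, 1

# payoffs[(row, col)] = (row player's payoff, column player's payoff)
payoffs = {
    (COOPERATE, COOPERATE): (-1, -1),
    (COOPERATE, DEFECT):    (-3,  0),
    (DEFECT,    COOPERATE): ( 0, -3),
    (DEFECT,    DEFECT):    (-2, -2),
}

def is_nash(a, b):
    """(a, b) is a Nash equilibrium if neither player can do strictly
    better by unilaterally switching to the other strategy."""
    row_ok = all(payoffs[(a, b)][0] >= payoffs[(alt, b)][0] for alt in (0, 1))
    col_ok = all(payoffs[(a, b)][1] >= payoffs[(a, alt)][1] for alt in (0, 1))
    return row_ok and col_ok

equilibria = [(a, b) for a in (0, 1) for b in (0, 1) if is_nash(a, b)]
print(equilibria)  # mutual defection is the only stable fixed point
```

Note the equilibrium need not be the best joint outcome: mutual cooperation pays both players more, but each has an incentive to defect from it, so it is not stable.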

How Technology Might Get Out of Control
via Instapaper

Things to Hang on Your Mental Mug Tree – some great morsels from Rory Sutherland

“There's a rather lovely company in the UK that pays people who are housebound—whether for medical reasons or because they are caregivers—to handwrite envelopes and letters. You could regard this as a very silly thing to do, but in costly signaling theory terms, it makes perfect sense. The open rate of these letters, and the response they generate, is an order of magnitude higher than for laser-printed letters.

Another thing worth bearing in mind is countersignaling, which, unlike signaling, seems to be uniquely human. There aren't cases of peacocks who demonstrate their extraordinary genetic quality by having really shitty tails. What seems to happen with humans is you have multiple parallel status currencies, and quite often you will signal your position on status by adopting none of the status currencies of the class immediately below your own, or by essentially demonstrating zero effort in standard status currencies. An unwashed bass guitarist in a cool rock band, for example, can get away with poor levels of hygiene, which signals: "I'm so sexy by dint of my bass guitar playing skills that I can get away with not making an effort in any of these conventional areas." Sometimes it's done as a positional thing, and sometimes it's done as a pure demonstration of handicap.

Relevance theory [from Dan Sperber and Deirdre Wilson] might be another thing that's interesting. In other words, replacing the “conduit” idea of communication with this idea that we communicate the minimum necessary for the recipient to recreate the message within their own head using context as a very large part of the information. Those interesting new theories of communication, which don't always sit with the Claude Shannon theories, are worth exploring. A very simple manifestation would be jokes which, like IKEA furniture, demand some self-assembly on the part of the recipient.”

Things to Hang on Your Mental Mug Tree
via Instapaper

The key to jobs in the future is not college but compassion – Read This

“the truth is, only a tiny percentage of people in the post-industrial world will ever end up working in software engineering, biotechnology or advanced manufacturing. Just as the behemoth machines of the industrial revolution made physical strength less necessary for humans, the information revolution frees us to complement, rather than compete with, the technical competence of computers. Many of the most important jobs of the future will require soft skills, not advanced algebra.”

The key to jobs in the future is not college but compassion – Livia Gershon | Aeon Essays
via Instapaper

This is how Big Oil will die – NewCo Shift (must read)

“It’s 2025, and 800,000 tons of used high-strength steel is coming up for auction.

The steel made up the Keystone XL pipeline, finally completed in 2019, two years after the project launched with great fanfare following approval by the Trump administration. The pipeline was built at a cost of about $7 billion, bringing oil from the Canadian tar sands to the US, with a pit stop in the town of Baker, Montana, to pick up US crude from the Bakken formation. At its peak, it carried over 500,000 barrels a day for processing at refineries in Texas and Louisiana.

But in 2025, no one wants the oil.

The Keystone XL will go down as the world’s last great fossil fuels infrastructure project. TransCanada, the pipeline’s operator, charged about $10 per barrel for the transportation services, which means the pipeline extension earned about $5 million per day, or $1.8 billion per year. But after shutting down less than four years into its expected 40-year operational life, it never paid back its costs.”
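The article's numbers check out. A quick back-of-the-envelope sketch, using only the figures quoted above, shows why under four years of operation can't recover a $7 billion build: gross tolls alone would take roughly 3.8 years to match the construction cost, and that is before any operating expenses.

```python
# Back-of-the-envelope check of the quoted Keystone XL figures.
barrels_per_day = 500_000     # peak throughput, per the article
toll_per_barrel = 10          # dollars charged by TransCanada for transport
construction_cost = 7e9       # dollars, per the article

daily_revenue = barrels_per_day * toll_per_barrel    # $5 million/day
annual_revenue = daily_revenue * 365                 # ~$1.8 billion/year
payback_years = construction_cost / annual_revenue   # gross revenue only

print(f"${daily_revenue/1e6:.0f}M/day, ${annual_revenue/1e9:.2f}B/year")
print(f"gross payback: {payback_years:.1f} years")
```

So even counting every toll dollar as profit, a shutdown "less than four years in" leaves the pipeline just short of breaking even on construction, and real operating costs only widen the gap.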

This is how Big Oil will die – NewCo Shift
via Instapaper

Have Smartphones Destroyed a Generation? - The Atlantic

“a generation shaped by the smartphone and by the concomitant rise of social media. I call them iGen. Born between 1995 and 2012, members of this generation are growing up with smartphones, have an Instagram account before they start high school, and do not remember a time before the internet. The Millennials grew up with the web as well, but it wasn’t ever-present in their lives, at hand at all times, day and night. iGen’s oldest members were early adolescents when the iPhone was introduced, in 2007, and high-school students when the iPad entered the scene, in 2010. A 2017 survey of more than 5,000 American teens found that three out of four owned an iPhone.

The advent of the smartphone and its cousin the tablet was followed quickly by hand-wringing about the deleterious effects of “screen time.” But the impact of these devices has not been fully appreciated, and goes far beyond the usual concerns about curtailed attention spans. The arrival of the smartphone has radically changed every aspect of teenagers’ lives, from the nature of their social interactions to their mental health. These changes have affected young people in every corner of the nation and in every type of household. The trends appear among teens poor and rich; of every ethnic background; in cities, suburbs, and small towns. Where there are cell towers, there are teens living their lives on their smartphone.

To those of us who fondly recall a more analog adolescence, this may seem foreign and troubling. The aim of generational study, however, is not to succumb to nostalgia for the way things used to be; it’s to understand how they are now. Some generational changes are positive, some are negative, and many are both. More comfortable in their bedrooms than in a car or at a party, today’s teens are physically safer than teens have ever been. They’re markedly less likely to get into a car accident and, having less of a taste for alcohol than their predecessors, are less susceptible to drinking’s attendant ills.

Psychologically, however, they are more vulnerable than Millennials were: Rates of teen depression and suicide have skyrocketed since 2011. It’s not an exaggeration to describe iGen as being on the brink of the worst mental-health crisis in decades. Much of this deterioration can be traced to their phones.”

Have Smartphones Destroyed a Generation? - The Atlantic
via Instapaper

Human ingenuity will be the genesis for IoT prosperity

“As business leaders, we must think beyond the fiscal bottom line and technological advances in products and services and ask ourselves, how will IoT affect the communities we operate in and what will our role be in readying society and the workforce for this digital phenomenon that is rapidly proliferating? Technology itself has no ethics. It is only when people apply purpose and innovative thinking beyond revenue and profit that we will be able to reap the collective benefits and security of the digital world.

We explored this topic in depth at the recent IoT World Forum in London, where renowned futurist Gerd Leonhard provided us a stunning window into the ethics of IoT and the critical role of human ingenuity in designing and shepherding its outcomes. (Watch the replay of Gerd’s keynote, moderated by Cisco’s CMO, Karen Walker: “Beyond Business: A Holistic View of the Societal and Human Impact of IoT.”)

As the IoT World Forum team put its agenda together for an influential community of C-suite executives in London, there was a realization that we needed to address this topic, as provocative (and sobering) as it might be. We recognized that we had to acknowledge the “elephant in the room”: that we are in uncharted territory, as we enter this new era of exponential change together. When we think about what the implications are of a rapid surge in IoT innovation, we must all collectively consider the potential effects on the geopolitical and global economic landscape (in both advanced and developing nations); on global challenges such as wealth inequality, aging populations, healthcare, and the environment; and on the global workforce. Of course, no one has all the answers, but we must be bold in exploring these issues as a global business community. I will explore this in more depth in my next blog, but I will say that we know we need a global unified approach to succeed. No one can go it alone, and a “head in the sand” mentality is not an option.”

Human ingenuity will be the genesis for IoT prosperity
via Instapaper

Made me think: End of humanity as we know it ‘coming in 2045’ and Google is preparing for it

“What is the singularity?

In maths and physics, a singularity is a point at which a function takes an infinite value.
The technological singularity, as it is called, is the moment when artificial intelligence takes off into ‘artificial superintelligence’ and becomes exponentially more intelligent more quickly.

As self-improvement becomes more efficient, it would get quicker and quicker at improvement until the machine became infinitely more intelligent infinitely quickly.”
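That last sentence is describing something stronger than exponential growth: a finite-time blow-up. A toy model (my illustration, not from the article) makes the distinction concrete: if intelligence grows at a rate proportional to its own square, dI/dt = I², the exact solution I(t) = I₀ / (1 − I₀·t) diverges at the finite time t = 1/I₀, whereas ordinary exponential growth only reaches infinity "at infinity".

```python
# Toy finite-time singularity: dI/dt = I**2 has the exact solution
# I(t) = I0 / (1 - I0 * t), which blows up at t = 1 / I0.
I0 = 1.0                 # arbitrary starting "intelligence" level
blowup_time = 1 / I0

for t in [0.0, 0.5, 0.9, 0.99, 0.999]:
    I = I0 / (1 - I0 * t)
    print(f"t={t:<6} I={I:,.0f}")
# I grows without bound as t approaches blowup_time = 1.0
```

The model is purely illustrative; the point is that "quicker and quicker at improvement" (growth rate feeding on its own result faster than linearly) is what turns smooth growth into a singularity at a definite date.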

End of humanity as we know it ‘coming in 2045’ and Google is preparing for it
via Instapaper

Martin Seligman: We Aren’t Built to Live in the Moment (why we are all futurists)

“What best distinguishes our species is an ability that scientists are just beginning to appreciate: We contemplate the future. Our singular foresight created civilization and sustains society. It usually lifts our spirits, but it’s also the source of most depression and anxiety, whether we’re evaluating our own lives or worrying about the nation. Other animals have springtime rituals for educating the young, but only we subject them to “commencement” speeches grandly informing them that today is the first day of the rest of their lives.

A more apt name for our species would be Homo prospectus, because we thrive by considering our prospects. The power of prospection is what makes us wise. Looking into the future, consciously and unconsciously, is a central function of our large brain, as psychologists and neuroscientists have discovered — rather belatedly, because for the past century most researchers have assumed that we’re prisoners of the past and the present.”

Opinion | We Aren’t Built to Live in the Moment
via Instapaper

The Real Risks of AI – NewCo Shift

“AGI can be something that we have virtually no understanding or recognition of, but which may have a significant understanding of us if it is given access to the Internet or a significant data repository.

Such a lack of mutual understanding is where all of the real risks reside. This is what we should be talking about when we talk about worst-case scenarios. We erroneously assume that we will be able to recognize AGI as such.”

The Real Risks of AI – NewCo Shift
via Instapaper