Tag: Technology

Why are there so few women in tech? The truth behind the Google memo

An engineer at the company has suggested male domination of Silicon Valley is down to biological differences between the sexes. But the root causes are much more complicated

It is time to be open about the science of human nature. This was the assertion of software engineer James Damore to his colleagues at Google, in an internal memo that has since led to his sacking. "I'm simply stating," Damore wrote, "that the distribution of preferences and abilities of men and women differ in part due to biological causes and that these differences may explain why we don't see equal representation of women in tech and leadership." He went on to imply that women's stronger interest in people, and their "neuroticism", might make them less naturally suited to being coders at Google.

The company's leadership viewed the matter differently, firing Damore and sparing his female colleagues the need to prove their biological aptitude for working with computers.

Sacking one errant employee doesn't alter an awkward fact, though. Only 20% of Google engineers are women, a statistic matched roughly across the big tech companies. So, does Damore have a point? Is there an underlying biological explanation for why so few women work at a company that prides itself on its progressive ideals and family-friendly ethos?

There are countless scientific studies that claim to identify differences between male and female cognitive aptitudes. And, in the UK, far fewer girls than boys choose to study computer science at GCSE level (20% of the total number of students), at degree level (16%) and beyond. There is something seductive about the idea that professional success springs from our innate abilities, rather than the degree to which society tips the odds in our favour.

After the contents of the memo became public, through a leak to tech site Gizmodo, the scientific argument for innate biological differences quickly found favour with some tech insiders, albeit those writing anonymously on sites such as Hacker News and the gossip app Blind.

Students at the Indian Institute of Management Lucknow. Far more women study computing in India than in the UK. Photograph: Hindustan Times/Getty Images

On Blind, which requires users to prove who they work for before posting, one Google employee wrote: "Can we go back to the time when Silicon Valley were [sic] about nerds and geeks, that's why I applied [to] Google and came to the US. I mean this industry used to be a safe place for people like us, why so fking complicated now." "I used to dislike conservatives until I started working in tech," wrote another. "Now I sympathise with them due to the hostility and groupthink, as well as the fact that they are the only ones standing up for classical liberal values."

While the biological hypothesis seems to appeal to some tech workers, the notion that Silicon Valley's gender gap can be explained away by such factors is questionable. Prof Dame Wendy Hall, a director of the Web Science Institute at the University of Southampton, points to the wide variation in gender ratios in computing internationally, which she argues would not be seen if there were a universal biological difference in ability between the sexes. While only 16% of computer science undergraduates in the UK and a similar proportion in the US are female, the balance is different in India, Malaysia and Nigeria.

"I walk into a classroom in India and it's more than 50% girls, the same in Malaysia," says Hall. "They are so passionate about coding. Lots of women love coding. There just aren't these gender differences there."

In fact, in the west, female participation in computer science has plunged since the mid-80s, while female participation in medicine and other scientific fields has increased steadily.

Over the past decade, even with a number of initiatives set up to boost girls' participation in coding and computer science, the proportion of female computer science undergraduates has continued to fall: 10 years ago, it was 19% of the UK total.

Hall believes that the gender gap, and the male "computer geek" stereotype, can be dated back to the advent of the home computer in the early 80s, when the machines were marketed heavily as gaming systems for men. She suspects this might be more culpable for women's low participation than men having evolved a mindset better suited to writing lines of code.

"Women were turned off computing in the 80s," she says. "Computers were sold as toys for the boys. Somehow that cultural stigma has stuck in the west in a way that we can't get rid of and it's just getting worse. The skills gap is going to get huge."

Jane Margolis, a psychologist at the University of California, Los Angeles, agrees. Margolis interviewed hundreds of computer science students in the 90s at Carnegie Mellon University, which had one of the top programmes in the country at the time.

"Many of the women at Carnegie Mellon talked about computers being in [their brothers'] bedroom and there were a lot of father-son internships around the computer that weren't happening with the girls," she says. "There was a cultural assumption that the norms of being in computer science were that you would do it 24/7, were obsessed with it, wanted nothing in your life but computers and that was very much associated with male adolescents," she added. "It was very much based around a male norm. Females were made to think that, if they didn't dream in code and if it wasn't their full obsession, they didn't belong or were not capable of being in the field."

Former Tinder vice-president Whitney Wolfe, who sued the company over "atrocious misogyny" in 2014. Photograph: Jeff Wilson for the Observer

Prof Gina Rippon, a neuroscientist at Aston University in Birmingham, has extensively studied cognitive differences between men and women. She says that, while Damore pointed to scientific evidence for men and women having different aptitudes and personality traits, he seemed to miss the point that, "even if there were well-established sex differences at any level, they're always very tiny. Certainly not enough to explain the gender ratios of Google programmers, even if you didn't want to get into the nitty-gritty of arguing about the science."

Rippon's work suggests that, in many cases, the differences between male and female performance, if present, are very small, can disappear with training and are not consistent across cultures.

In one study, Rippon found that British men performed significantly better on a spatial rotation task than women. However, when the experiment was repeated with Chinese participants, there was no difference between the male and female participants. Other similar studies have found that gender differences in spatial rotation tasks disappeared when the researchers controlled for video game experience. Rippon points to another study, which showed that differences in personality traits between men and women varied wildly across countries, depending on the status of women in that society.

So, Damore's suggestion that women are more prone to anxiety does not imply that this difference is a function of hormones or hardwiring of the brain. There is also compelling evidence that unconscious biases have a powerful effect on what people expect themselves to be good at and how they perform. For instance, girls tend to score worse on a test if they are told their maths skills are being assessed than if they are told they are taking part in a study investigating how people solve problems.

Even assuming that there are fundamental differences between male and female cognition and personality, there is no clear, logical line between such findings in a laboratory setting and performance in the workplace.

Priya Guha, the UK lead of tech incubator RocketSpace and a former UK consul general in San Francisco, argues that, even by its own arguments, Damore's memo missed the point. "The description of an engineer as somebody who has their head down, focused on developing the next line of code, is the sort of engineer that won't be adding value," she says. "We need engineers out there who are both very strong developers, but also people who understand the world around them and are comfortable interacting with society. So, by that description, women would be better engineers, even by the stereotypes he proposes."

Unfortunately, many such multiskilled people are likely to be deterred by the perception of hostility engendered by claims like Damore's. "We have a historical challenge to encourage girls, let alone women, into careers such as engineering, which then creates an imbalance in the people who enter tech industries overall," says Guha. "Tech has a particular problem in this area. Wherever there are instances of people creating a hostile environment, companies need to stamp that out quickly. His dismissal sends a really powerful message: the environment in these companies needs to be thought about to ensure that it improves day by day."

But Eileen Burbidge, a partner at venture capital firm Passion Capital, argues that tech does not have a significantly worse gender gap than other high-pressure industries such as finance or the media. "I think it comes down to cultural norms and female representation in general," Burbidge says. "It is what affects the rest of the business world: it's around the same time that women start thinking about having families that they think about the opportunity cost of staying in a work environment, and if it's not positive or they get negative influences it's going to affect their decisions."

She argues that, in many ways, tech is better placed than most large industries to tackle its gender gaps. "I don't think there's anything specific that needs to be done for technology: I think the tech sector is more introspective and likes to think of itself as more progressive, so remedies that work for other sectors will help here, too," she says.

"In Stem [science, technology, engineering and mathematics] in particular, we're seeing the tech industry trying to be more proactive about outreach. The industry is trying to have this discussion a lot; companies don't always follow what they say, but they say it, at least."

"Computing is too important to be left to men" … the late computer scientist Karen Spärck Jones. Photograph: Cambridge University

Peter Daly, an associate in the employment team at the law firm Bindmans, agrees with Burbidge. "The clients I've had from the tech world are pretty evenly split by gender," he says. "But, because it encourages risk-taking, tech doesn't fit well with maternity and pregnancy, so that can be a source of a lot of friction. You see people in the industry who see pregnancy as a genuine problem." That, he says, is the main cause of gender-specific issues in technology, at least those that reach the stage of requiring legal recourse.

Internal documents such as Damore's are the soft end of the sort of hostile working environment female employees can face at overwhelmingly male tech firms. At the extreme end, as companies such as Uber and Tinder have learned, this environment can result in claims of sexual harassment and illegal discrimination.

At Uber, where 85% of technical employees are male, one engineer, Susan Fowler, wrote a tell-all blogpost that revealed a workplace where managers propositioned female employees for sex and human resources did little to stop it. Tinder faced a similar scandal when former VP Whitney Wolfe sued the company over "atrocious misogyny" in 2014, entering into evidence abusive texts allegedly sent by Tinder's chairman, Sean Rad.

Beyond the egregious cases, the wider culture of even the most diverse Silicon Valley firms can still end up being off-putting to would-be employees: the campus-style culture, which encourages workers to be on site from dawn till dusk, renders it hard for any primary caregiver to be part of the team, while in some companies an antipathy to part-time work or on-site crèches can also limit flexibility.

Addressing the gender gap isn't only an issue of perception. Companies with homogenous workforces make worse products and earn less money, argues Guha. "We know large numbers of women are struggling to get funding. A female founder is 86% less likely to be funded than a man," she says. "That's crazy when we know the return on investment is higher; it is about 34% higher for companies with a gender-diverse leadership. It's not about corporate social responsibility: a diverse range of thinking will bring better value for the company."

As we move into a future in which algorithms have greater influence on our lives, from communication to healthcare, transport to the law, the gender balance in tech companies goes beyond what is fair for their employees. Male domination of tech has led to the development of, for example, voice recognition technologies that, trained and tested solely by men, struggle to understand female voices. It has resulted in virtual reality technologies that disproportionately induce motion sickness in women. At this early moment in its history, the tech industry is already littered with products that have gender bias effectively programmed into them.

"The most objectionable point about that memo was the notion that there are biological differences that make women less capable," said Burbidge. "Obviously, I have an issue with that and I think it's fundamentally incorrect. The thing I can't answer is how, in 2017, do you stop people thinking that? I don't know how you change people's minds."

"As we go into the world of AI, when people are designing algorithms that help us live our lives, it will be very bad if that's all done by men," says Hall. "Social care, looking after kids, so many aspects of our lives. We really need as many people as possible doing this. It's really important and it's going to get more important."

Hall invokes her late mentor Karen Spärck Jones, a pioneering British computer scientist who campaigned hard to encourage more women into the field. As she used to say: "Computing is too important to be left to men."

Read more: https://www.theguardian.com/lifeandstyle/2017/aug/08/why-are-there-so-few-women-in-tech-the-truth-behind-the-google-memo


GoldieBlox agreed to pay $1m to charity in Beastie Boys settlement

Argument over 'fair use' of Girls song ended in March with deal to donate to STEM education for girls. By Stuart Dredge

In March, the Beastie Boys reached a settlement with the US toy company GoldieBlox over the latter's parody of their song Girls in a viral advert. Now the details of that settlement have been published.

GoldieBlox agreed to pay $1m to a charity of the band's choice supporting science, technology, engineering and/or maths (STEM) education for girls, in return for a backdated licence to use the track in the ad, which was a YouTube hit in November 2013.

The details were revealed in a document filed as part of a separate copyright infringement lawsuit between the Beastie Boys and Monster Energy, spotted by entertainment industry journalist Eriq Gardner:

"On March 16, 2014, the parties settled … The GoldieBlox Settlement granted GoldieBlox a retroactive license to use the musical composition of Girls between November 18, 2013 and November 28, 2013.

"In exchange, GoldieBlox agreed to make annual payments of 1% of its gross revenue, until the total payments reached $1 million, to a charitable organization chosen by the Beastie Boys and approved by GoldieBlox which supports science, technology, engineering and/or mathematics education for girls."

The settlement will not be used as evidence in the new case after a complaint from Monster, which is accused of using several Beastie Boys songs in a 2012 snowboarding video without approval from the band.

The GoldieBlox video, which reworked the original sexist lyrics of Girls to focus on encouraging girls to explore STEM subjects, was watched 8m times in a week last November, before sparking lawsuits from both sides and a debate over whether the video’s use of the song was “fair use” or not.

The Beastie Boys explained their decision in an open letter to GoldieBlox that month, saying they were respecting the wishes of member Adam Yauch, who died in 2012 and left a will explicitly banning use of his music in advertisements:

"As creative as it is, make no mistake, your video is an advertisement that is designed to sell a product, and long ago we made a conscious decision not to permit our music and/or name to be used in product ads."

GoldieBlox responded with its own open letter after removing the track from the video:

"We don't want to fight with you … When we made our parody version of your song, Girls, we did it with the best intentions. We wanted to transform it into a powerful anthem for girls … Although we believe our parody video falls under fair use, we would like to respect [Yauch's] wishes and yours."

It remains to be seen how soon the $1m settlement is paid. GoldieBlox raised just under $286k on the crowdfunding site Kickstarter in 2012 to launch its business, which produces construction toys and books for girls, but its sales since the toys launched in March 2013 are unknown.

GoldieBlox: the toy designed to get girls into engineering

Read more: https://www.theguardian.com/technology/2014/may/13/goldieblox-beastie-boys-girls-settlement


Your animal life is over. Machine life has begun. The road to immortality

In California, radical scientists and billionaire backers think the technology to extend life by uploading minds to exist separately from the body is only a few years away

Here's what happens. You are lying on an operating table, fully conscious, but rendered otherwise insensible, otherwise incapable of movement. A humanoid machine appears at your side, bowing to its task with ceremonial formality. With a brisk sequence of motions, the machine removes a large panel of bone from the rear of your cranium, before carefully laying its fingers, fine and delicate as a spider's legs, on the viscid surface of your brain. You may be experiencing some misgivings about the procedure at this point. Put them aside, if you can.

You're in pretty deep with this thing; there's no backing out now. With their high-resolution microscopic receptors, the machine fingers scan the chemical structure of your brain, transferring the data to a powerful computer on the other side of the operating table. They are sinking further into your cerebral matter now, these fingers, scanning deeper and deeper layers of neurons, building a three-dimensional map of their endlessly complex interrelations, all the while creating code to model this activity in the computer's hardware. As the work proceeds, another mechanical appendage, less delicate, less careful, removes the scanned material to a biological waste container for later disposal. This is material you will no longer be needing.

At some point, you become aware that you are no longer present in your body. You observe, with sadness, or horror, or detached curiosity, the diminishing spasms of that body on the operating table, the last useless convulsions of a discontinued meat.

The animal life is over now. The machine life has begun.

This, more or less, is the scenario outlined by Hans Moravec, a professor of cognitive robotics at Carnegie Mellon, in his 1988 book Mind Children: The Future of Robot and Human Intelligence. It is Moravec's conviction that the future of the human species will involve a mass-scale desertion of our biological bodies, effected by procedures of this kind. It's a belief shared by many transhumanists, a movement whose aim is to improve our bodies and minds to the point where we become something other and better than the animals we are. Ray Kurzweil, for one, is a prominent advocate of the idea of mind-uploading. "An emulation of the human brain running on an electronic system," he writes in The Singularity Is Near, "would run much faster than our biological brains. Although human brains benefit from massive parallelism (on the order of 100 trillion interneuronal connections, all potentially operating simultaneously), the reset time of the connections is extremely slow compared to contemporary electronics." The technologies required for such an emulation, sufficiently powerful and capacious computers and sufficiently advanced brain-scanning techniques, will be available, he announces, by the early 2030s.

And this, obviously, is no small claim. We are talking about not just radically extended life spans, but also radically expanded cognitive abilities. We are talking about endless copies and iterations of the self. Having undergone a procedure like this, you would exist, to the extent you could meaningfully be said to exist at all, as an entity of unbounded possibilities.

I was introduced to Randal Koene at a Bay Area transhumanist conference. He wasn't speaking at the conference, but had come along out of personal interest. A cheerfully reserved man in his early 40s, he spoke in the punctilious staccato of a non-native English speaker who had long mastered the language. As we parted, he handed me his business card and much later that evening I removed it from my wallet and had a proper look at it. The card was illustrated with a picture of a laptop, on whose screen was displayed a stylised image of a brain. Underneath was printed what seemed to me an attractively mysterious message: "Carboncopies: Realistic Routes to Substrate Independent Minds. Randal A Koene, founder."

I took out my laptop and went to the website of Carboncopies, which I learned was a nonprofit organisation with a goal of "advancing the reverse engineering of neural tissue and complete brains, Whole Brain Emulation and development of neuroprostheses that reproduce functions of mind, creating what we call Substrate Independent Minds". This latter term, I read, was "the objective to be able to sustain person-specific functions of mind and experience in many different operational substrates besides the biological brain". And this, I further learned, was a process "analogous to that by which platform independent code can be compiled and run on many different computing platforms".
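The platform-independence analogy Carboncopies draws can be made concrete with a toy sketch: code written against an abstract interface runs unchanged on interchangeable backends, which is the sense in which a "mind" could be "substrate independent". Every name and class here is invented purely for illustration; nothing reflects a real API or any actual neuroscience.

```python
# Illustrative only: a "mind" written against an abstract substrate interface
# runs identically on any backend that implements it.
from abc import ABC, abstractmethod

class Substrate(ABC):
    """The contract any 'carrier' must satisfy (a hypothetical interface)."""
    @abstractmethod
    def compute(self, signal: float) -> float: ...

class BiologicalSubstrate(Substrate):
    def compute(self, signal: float) -> float:
        return signal * 0.9   # stand-in for wetware

class SiliconSubstrate(Substrate):
    def compute(self, signal: float) -> float:
        return signal * 0.9   # same function, different medium

def mind(substrate: Substrate, signal: float) -> float:
    """A 'function of mind' that never asks what it is running on."""
    return substrate.compute(signal)

# The same logic yields the same result regardless of carrier.
assert mind(BiologicalSubstrate(), 1.0) == mind(SiliconSubstrate(), 1.0)
```

The design point, in programming terms, is simply dependency inversion: the "mind" depends on the interface, not on any one implementation, which is the whole content of the analogy.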

It seemed that I had met, without realising it, a person who was actively working toward the kind of brain-uploading scenario that Kurzweil had outlined in The Singularity Is Near. And this was a person I needed to get to know.

Randal Koene: "It wasn't like I was walking into labs, telling people I wanted to upload human minds to computers."

Koene was an affable and precisely eloquent man and his conversation was unusually engaging for someone so forbiddingly intelligent who worked in so rarefied a field as computational neuroscience; so, in his company, I often found myself momentarily forgetting about the nearly unthinkable implications of the work he was doing, the profound metaphysical weirdness of the things he was explaining to me. He'd be talking about some tangential topic, his happily cordial relationship with his ex-wife, say, or the cultural differences between European and American scientific communities, and I'd remember with a slow, uncanny suffusion of unease that his work, were it to yield the kind of results he is aiming for, would amount to the most significant event since the evolution of Homo sapiens. The odds seemed pretty long from where I was standing, but then again, I reminded myself, the history of science was in many ways an almanac of highly unlikely victories.

One evening in early spring, Koene drove down to San Francisco from the North Bay, where he lived and worked in a rented ranch house surrounded by rabbits, to meet me for dinner in a small Argentinian restaurant on Columbus Avenue. The faint trace of an accent turned out to be Dutch. Koene was born in Groningen and had spent most of his early childhood in Haarlem. His father was a particle physicist and there were frequent moves, including a two-year stint in Winnipeg, as he followed his work from one experimental nuclear facility to the next.

Now a boyish 43, he had lived in California only for the past five years, but had come to think of it as home, or the closest thing to home he'd encountered in the course of a nomadic life. And much of this had to do with the culture of techno-progressivism that had spread outward from its concentrated origins in Silicon Valley and come to encompass the entire Bay Area, with its historically high turnover of radical ideas. It had been a while now, he said, since he'd described his work to someone, only for them to react as though he were making a misjudged joke or simply to walk off mid-conversation.

In his early teens, Koene began to conceive of the major problem with the human brain in computational terms: it was not, like a computer, readable and rewritable. You couldn't get in there and enhance it, make it run more efficiently, like you could with lines of code. You couldn't just speed up a neuron like you could with a computer processor.

Around this time, he read Arthur C Clarke's The City and the Stars, a novel set a billion years from now, in which the enclosed city of Diaspar is ruled by a superintelligent Central Computer, which creates bodies for the city's posthuman citizens and stores their minds in its memory banks at the end of their lives, for purposes of reincarnation. Koene saw nothing in this idea of reducing human beings to data that seemed to him implausible and felt nothing in himself that prevented him from working to bring it about. His parents encouraged him in this peculiar interest and the scientific prospect of preserving human minds in hardware became a regular topic of dinnertime conversation.

Computational neuroscience, which drew its practitioners not from biology but from the fields of mathematics and physics, seemed to offer the most promising approach to the problem of mapping and uploading the mind. It wasnt until he began using the internet in the mid-1990s, though, that he discovered a loose community of people with an interest in the same area.

As a PhD student in computational neuroscience at Montreal's McGill University, Koene was initially cautious about revealing the underlying motivation for his studies, for fear of being taken for a fantasist or an eccentric.

"I didn't hide it, as such," he said, "but it wasn't like I was walking into labs, telling people I wanted to upload human minds to computers either. I'd work with people on some related area, like the encoding of memory, with a view to figuring out how that might fit into an overall road map for whole brain emulation."

Having worked for a while at Halcyon Molecular, a Silicon Valley gene-sequencing and nanotechnology startup funded by Peter Thiel, he decided to stay in the Bay Area and start his own nonprofit company aimed at advancing the cause to which he'd long been dedicated: Carboncopies.

Koene's decision was rooted in the very reason he began pursuing that work in the first place: an anxious awareness of the small and diminishing store of days that remained to him. If he'd gone the university route, he'd have had to devote most of his time, at least until securing tenure, to projects that were at best tangentially relevant to his central enterprise. The path he had chosen was a difficult one for a scientist and he lived and worked from one small infusion of private funding to the next.

But Silicon Valley's culture of radical techno-optimism had been its own sustaining force for him, and a source of financial backing for a project that took its place within the wildly aspirational ethic of that cultural context. There were people there or thereabouts, wealthy and influential, for whom a future in which human minds might be uploaded to computers was one to be actively sought, a problem to be solved, disruptively innovated, by the application of money.

Brainchild of the movies: in Transcendence (2014), scientist Will Caster, played by Johnny Depp, uploads his mind to a computer program with dangerous results.

One such person was Dmitry Itskov, a 36-year-old Russian tech multimillionaire and founder of the 2045 Initiative, an organisation whose stated aim was to create technologies "enabling the transfer of an individual's personality" to a more advanced nonbiological carrier, "and extending life, including to the point of immortality". One of Itskov's projects was the creation of avatars: artificial humanoid bodies that would be controlled through brain-computer interface, technologies that would be complementary with uploaded minds. He had funded Koene's work with Carboncopies and in 2013 they organised a conference in New York called Global Futures 2045, aimed, according to its promotional blurb, at the discussion of "a new evolutionary strategy for humanity".

When we spoke, Koene was working with another tech entrepreneur named Bryan Johnson, who had sold his automated payment company to PayPal a couple of years back for $800m and who now controlled a venture capital concern called the OS Fund, which, I learned from its website, invests in entrepreneurs working towards "quantum leap discoveries that promise to rewrite the operating systems of life". This language struck me as strange and unsettling in a way that revealed something crucial about the attitude toward human experience that was spreading outward from its Bay Area centre: a cluster of software metaphors that had metastasised into a way of thinking about what it meant to be a human being.

And it was the same essential metaphor that lay at the heart of Koene's project: the mind as a piece of software, an application running on the platform of flesh. When he used the term "emulation", he was using it explicitly to evoke the sense in which a PC's operating system could be emulated on a Mac, as what he called "platform independent code".

The relevant science for whole brain emulation is, as you'd expect, hideously complicated, and its interpretation deeply ambiguous, but if I can risk a gross oversimplification here, I will say that it is possible to conceive of the idea as something like this: first, you scan the pertinent information in a person's brain (the neurons, the endlessly ramifying connections between them, the information-processing activity of which consciousness is seen as a byproduct) through whatever technology, or combination of technologies, becomes feasible first (nanobots, electron microscopy, etc). That scan then becomes a blueprint for the reconstruction of the subject brain's neural networks, which is then converted into a computational model. Finally, you emulate all of this on a third-party non-flesh-based substrate: some kind of supercomputer or a humanoid machine designed to reproduce and extend the experience of embodiment; something, perhaps, like Natasha Vita-More's Primo Posthuman.
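The three stages just described, scan, reconstruct, emulate, can be caricatured in code. This is a deliberately trivial sketch under invented assumptions: a real connectome involves on the order of 100 trillion connections, not the five-neuron chain used here, and no real scanning API is implied.

```python
# A purely illustrative pipeline: scan -> reconstruct -> emulate.
# All names and data structures are invented for illustration.
from dataclasses import dataclass

@dataclass
class Synapse:
    target: int      # index of the downstream neuron
    weight: float    # connection strength "recovered" from the scan

def scan_brain(num_neurons: int) -> dict:
    """Stage 1: produce a connectivity map (here, a trivial chain of neurons)."""
    return {i: [Synapse(target=i + 1, weight=0.5)]
            for i in range(num_neurons - 1)}

def build_model(connectome: dict) -> dict:
    """Stage 2: convert the scan into a computational model (adjacency lists)."""
    return {src: [(s.target, s.weight) for s in synapses]
            for src, synapses in connectome.items()}

def emulate(model: dict, stimulus: dict, steps: int) -> dict:
    """Stage 3: run the model on a new substrate, propagating activation."""
    activity = dict(stimulus)
    for _ in range(steps):
        nxt = {}
        for src, level in activity.items():
            for target, weight in model.get(src, []):
                nxt[target] = nxt.get(target, 0.0) + level * weight
        activity = nxt
    return activity

model = build_model(scan_brain(5))
# A unit stimulus at neuron 0 decays by half per hop along the chain.
print(emulate(model, {0: 1.0}, steps=4))  # -> {4: 0.0625}
```

The hard part, of course, is that stage 1 does not exist: everything downstream of `scan_brain` is routine computation, which is precisely why the scanning technologies are the bottleneck the text describes.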

The whole point of substrate independence, as Koene pointed out to me whenever I asked him what it would be like to exist outside of a human body (and I asked him many times, in various ways), was that it would be like no one thing, because there would be no one substrate, no one medium of being. This was the concept transhumanists referred to as "morphological freedom": the liberty to take any bodily form technology permits.

"You can be anything you like," as an article about uploading in Extropy magazine put it in the mid-90s. "You can be big or small; you can be lighter than air and fly; you can teleport and walk through walls. You can be a lion or an antelope, a frog or a fly, a tree, a pool, the coat of paint on a ceiling."

What really interested me about this idea was not how strange and far-fetched it seemed (though it ticked those boxes resolutely enough), but rather how fundamentally identifiable it was, how universal. When talking to Koene, I was mostly trying to get to grips with the feasibility of the project and with what it was he envisioned as a desirable outcome. But then we would part company, I would hang up the call, or I would take my leave and start walking toward the nearest station, and I would find myself feeling strangely affected by the whole project, strangely moved.

Because there was something, in the end, paradoxically and definitively human in this desire for liberation from human form. I found myself thinking often of WB Yeatss Sailing to Byzantium, in which the ageing poet writes of his burning to be free of the weakening body, the sickening heart to abandon the dying animal for the manmade and immortal form of a mechanical bird. Once out of nature, he writes, I shall never take/ My bodily form from any natural thing/ But such a form as Grecian goldsmiths make.

One evening, we were sitting outside a combination bar/laundromat/standup comedy venue in Folsom Street a place with the fortuitous name of BrainWash when I confessed that the idea of having my mind uploaded to some technological substrate was deeply unappealing to me, horrifying even. The effects of technology on my life, even now, were something about which I was profoundly ambivalent; for all I had gained in convenience and connectedness, I was increasingly aware of the extent to which my movements in the world were mediated and circumscribed by corporations whose only real interest was in reducing the lives of human beings to data, as a means to further reducing us to profit.

The content we consumed, the people with whom we had romantic encounters, the news we read about the outside world: all these movements were coming increasingly under the influence of unseen algorithms, the creations of these corporations, whose complicity with government, moreover, had come to seem like the great submerged narrative of our time. Given the world we were living in, where the fragile liberal ideal of the autonomous self was already receding like a half-remembered dream into the doubtful haze of history, wouldnt a radical fusion of ourselves with technology amount, in the end, to a final capitulation of the very idea of personhood?

Koene nodded again and took a sip of his beer.

Hearing you say that, he said, makes it clear that theres a major hurdle there for people. Im more comfortable than you are with the idea, but thats because Ive been exposed to it for so long that Ive just got used to it.

Dmitry
Russian billionaire Dmitry Itskov wants to create technologies enabling the transfer of an individuals personality to a more advanced nonbiological carrier. Photograph: Mary Altaffer/AP

In the weeks and months after I returned from San Francisco, I thought obsessively about the idea of whole brain emulation. One morning, I was at home in Dublin, suffering from both a head cold and a hangover. I lay there, idly considering hauling myself out of bed to join my wife and my son, who were in his bedroom next door enjoying a raucous game of Buckaroo. I realised that these conditions (head cold, hangover) had imposed upon me a regime of mild bodily estrangement. As often happens when Im feeling under the weather, I had a sense of myself as an irreducibly biological thing, an assemblage of flesh and blood and gristle. I felt myself to be an organism with blocked nasal passages, a bacteria-ravaged throat, a sorrowful ache deep within its skull, its cephalon. I was aware of my substrate, in short, because my substrate felt like shit.

And I was gripped by a sudden curiosity as to what, precisely, that substrate consisted of, as to what I myself happened, technically speaking, to be. I reached across for the phone on my nightstand and entered into Google the words What is the human… The first three autocomplete suggestions offered What is The Human Centipede about, and then: What is the human body made of, and then: What is the human condition.

It was the second question I wanted answered at this particular time, as perhaps a back door into the third. It turned out that I was 65% oxygen, which is to say that I was mostly air, mostly nothing. After that, I was composed of diminishing quantities of carbon and hydrogen, of calcium and sulphur and chlorine, and so on down the elemental table. I was also mildly surprised to learn that, like the iPhone I was extracting this information from, I also contained trace elements of copper and iron and silicon.

What a piece of work is a man, I thought, what a quintessence of dust.

Some minutes later, my wife entered the bedroom on her hands and knees, our son on her back, gripping the collar of her shirt tight in his little fists. She was making clip-clop noises as she crawled forward, he was laughing giddily and shouting: Dont buck! Dont buck!

With a loud neighing sound, she arched her back and sent him tumbling gently into a row of shoes by the wall and he screamed in delighted outrage, before climbing up again. None of this, I felt, could be rendered in code. None of this, I felt, could be run on any other substrate. Their beauty was bodily, in the most profound sense, in the saddest and most wonderful sense.

I never loved my wife and our little boy more, I realised, than when I thought of them as mammals. I dragged myself, my animal body, out of bed to join them.

To Be a Machine by Mark OConnell is published by Granta (12.99). To order a copy for 11.04 go to bookshop.theguardian.com or call 0330 333 6846. Free UK p&p over 10, online orders only. Phone orders min p&p of 1.99

Read more: https://www.theguardian.com/science/2017/mar/25/animal-life-is-over-machine-life-has-begun-road-to-immortality

Technorati Tags: , , , , , , , , ,

Your animal life is over. Machine life has begun. The road to immortality

In California, radical scientists and billionaire backers think the technology to extend life by uploading minds to exist separately from the body is only a few years away

Here's what happens. You are lying on an operating table, fully conscious, but rendered otherwise insensible, otherwise incapable of movement. A humanoid machine appears at your side, bowing to its task with ceremonial formality. With a brisk sequence of motions, the machine removes a large panel of bone from the rear of your cranium, before carefully laying its fingers, fine and delicate as a spider's legs, on the viscid surface of your brain. You may be experiencing some misgivings about the procedure at this point. Put them aside, if you can.

You're in pretty deep with this thing; there's no backing out now. With their high-resolution microscopic receptors, the machine fingers scan the chemical structure of your brain, transferring the data to a powerful computer on the other side of the operating table. They are sinking further into your cerebral matter now, these fingers, scanning deeper and deeper layers of neurons, building a three-dimensional map of their endlessly complex interrelations, all the while creating code to model this activity in the computer's hardware. As the work proceeds, another mechanical appendage, less delicate, less careful, removes the scanned material to a biological waste container for later disposal. This is material you will no longer be needing.

At some point, you become aware that you are no longer present in your body. You observe with sadness, or horror, or detached curiosity the diminishing spasms of that body on the operating table, the last useless convulsions of a discontinued meat.

The animal life is over now. The machine life has begun.

This, more or less, is the scenario outlined by Hans Moravec, a professor of cognitive robotics at Carnegie Mellon, in his 1988 book Mind Children: The Future of Robot and Human Intelligence. It is Moravec's conviction that the future of the human species will involve a mass-scale desertion of our biological bodies, effected by procedures of this kind. It's a belief shared by many transhumanists, a movement whose aim is to improve our bodies and minds to the point where we become something other and better than the animals we are. Ray Kurzweil, for one, is a prominent advocate of the idea of mind-uploading. An emulation of the human brain running on an electronic system, he writes in The Singularity Is Near, "would run much faster than our biological brains. Although human brains benefit from massive parallelism (on the order of 100 trillion interneuronal connections, all potentially operating simultaneously), the reset time of the connections is extremely slow compared to contemporary electronics." The technologies required for such an emulation (sufficiently powerful and capacious computers and sufficiently advanced brain-scanning techniques) will be available, he announces, by the early 2030s.

And this, obviously, is no small claim. We are talking about not just radically extended life spans, but also radically expanded cognitive abilities. We are talking about endless copies and iterations of the self. Having undergone a procedure like this, you would exist, to the extent you could meaningfully be said to exist at all, as an entity of unbounded possibilities.
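Kurzweil's parallelism-versus-speed point lends itself to a back-of-envelope check. The figures below are rough assumptions for illustration (neither the neuron firing rate nor the electronic switching rate appears in the excerpt):

```python
# Back-of-envelope comparison of biological and electronic signalling.
# Both rates are rough assumed figures for illustration only.
NEURON_FIRING_HZ = 200        # assumed ceiling for a neuron's firing rate
TRANSISTOR_SWITCH_HZ = 1e9    # assumed switching rate of modern electronics

def per_element_speedup(bio_hz: float = NEURON_FIRING_HZ,
                        chip_hz: float = TRANSISTOR_SWITCH_HZ) -> float:
    """How many times faster one electronic switch is than one neuron."""
    return chip_hz / bio_hz

if __name__ == "__main__":
    # On these assumptions each element is millions of times faster,
    # which is the gap an emulation would exploit even before
    # reproducing the brain's 100 trillion-connection parallelism.
    print(f"{per_element_speedup():,.0f}x")
```

On these assumed numbers the per-element gap is about five million to one, which is the sense in which the brain's advantage is breadth, not speed.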

I was introduced to Randal Koene at a Bay Area transhumanist conference. He wasn't speaking at the conference, but had come along out of personal interest. A cheerfully reserved man in his early 40s, he spoke in the punctilious staccato of a non-native English speaker who had long mastered the language. As we parted, he handed me his business card and much later that evening I removed it from my wallet and had a proper look at it. The card was illustrated with a picture of a laptop, on whose screen was displayed a stylised image of a brain. Underneath was printed what seemed to me an attractively mysterious message: "Carboncopies: Realistic Routes to Substrate Independent Minds. Randal A Koene, founder."

I took out my laptop and went to the website of Carboncopies, which I learned was a nonprofit organisation with a goal of "advancing the reverse engineering of neural tissue and complete brains, Whole Brain Emulation and development of neuroprostheses that reproduce functions of mind, creating what we call Substrate Independent Minds". This latter term, I read, was the objective: to be able to sustain person-specific functions of mind and experience in many different operational substrates besides the biological brain. And this, I further learned, was a process "analogous to that by which platform independent code can be compiled and run on many different computing platforms".
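The analogy to platform-independent code can be made concrete with a toy sketch. Nothing below reflects Carboncopies' actual methods; the tiny transition table standing in for a "mind" is invented purely to show one behaviour specification running unchanged on two unrelated "substrates":

```python
# Toy illustration of "substrate independence": one behaviour specification,
# two different execution substrates, identical observable behaviour.
MIND = {  # (state, stimulus) -> (next_state, response); invented example
    ("calm", "threat"): ("alert", "flinch"),
    ("alert", "threat"): ("alert", "flinch"),
    ("alert", "quiet"): ("calm", "relax"),
    ("calm", "quiet"): ("calm", "rest"),
}

def run_on_dict_substrate(state, stimuli):
    """Substrate A: direct dictionary lookups on the raw table."""
    responses = []
    for stimulus in stimuli:
        state, response = MIND[(state, stimulus)]
        responses.append(response)
    return responses

def run_on_compiled_substrate(state, stimuli):
    """Substrate B: the same table 'compiled' into per-state handlers."""
    handlers = {
        s: {stim: nxt for (st, stim), nxt in MIND.items() if st == s}
        for s in {st for st, _ in MIND}
    }
    responses = []
    for stimulus in stimuli:
        state, response = handlers[state][stimulus]
        responses.append(response)
    return responses
```

Feed both substrates the same stimuli and the responses match exactly, which is all the analogy asks: the medium differs, the behaviour does not.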

It seemed that I had met, without realising it, a person who was actively working toward the kind of brain-uploading scenario that Kurzweil had outlined in The Singularity Is Near. And this was a person I needed to get to know.

Randal
Randal Koene: "It wasn't like I was walking into labs, telling people I wanted to upload human minds to computers."

Koene was an affable and precisely eloquent man and his conversation was unusually engaging for someone so forbiddingly intelligent, working in so rarefied a field as computational neuroscience; in his company, I often found myself momentarily forgetting about the nearly unthinkable implications of the work he was doing, the profound metaphysical weirdness of the things he was explaining to me. He'd be talking about some tangential topic (his happily cordial relationship with his ex-wife, say, or the cultural differences between European and American scientific communities) and I'd remember with a slow, uncanny suffusion of unease that his work, were it to yield the kind of results he is aiming for, would amount to the most significant event since the evolution of Homo sapiens. The odds seemed pretty long from where I was standing, but then again, I reminded myself, the history of science was in many ways an almanac of highly unlikely victories.

One evening in early spring, Koene drove down to San Francisco from the North Bay, where he lived and worked in a rented ranch house surrounded by rabbits, to meet me for dinner in a small Argentinian restaurant on Columbus Avenue. The faint trace of an accent turned out to be Dutch. Koene was born in Groningen and had spent most of his early childhood in Haarlem. His father was a particle physicist and there were frequent moves, including a two-year stint in Winnipeg, as he followed his work from one experimental nuclear facility to the next.

Now a boyish 43, he had lived in California only for the past five years, but had come to think of it as home, or the closest thing to home he'd encountered in the course of a nomadic life. And much of this had to do with the culture of techno-progressivism that had spread outward from its concentrated origins in Silicon Valley and come to encompass the entire Bay Area, with its historically high turnover of radical ideas. It had been a while now, he said, since he'd described his work to someone, only for them to react as though he were making a misjudged joke or simply to walk off mid-conversation.

In his early teens, Koene began to conceive of the major problem with the human brain in computational terms: it was not, like a computer, readable and rewritable. You couldn't get in there and enhance it, make it run more efficiently, like you could with lines of code. You couldn't just speed up a neuron like you could with a computer processor.

Around this time, he read Arthur C Clarke's The City and the Stars, a novel set a billion years from now, in which the enclosed city of Diaspar is ruled by a superintelligent Central Computer, which creates bodies for the city's posthuman citizens and stores their minds in its memory banks at the end of their lives, for purposes of reincarnation. Koene saw nothing in this idea of reducing human beings to data that seemed to him implausible and felt nothing in himself that prevented him from working to bring it about. His parents encouraged him in this peculiar interest and the scientific prospect of preserving human minds in hardware became a regular topic of dinnertime conversation.

Computational neuroscience, which drew its practitioners not from biology but from the fields of mathematics and physics, seemed to offer the most promising approach to the problem of mapping and uploading the mind. It wasn't until he began using the internet in the mid-1990s, though, that he discovered a loose community of people with an interest in the same area.

As a PhD student in computational neuroscience at Montreals McGill University, Koene was initially cautious about revealing the underlying motivation for his studies, for fear of being taken for a fantasist or an eccentric.

"I didn't hide it, as such," he said, "but it wasn't like I was walking into labs, telling people I wanted to upload human minds to computers either. I'd work with people on some related area, like the encoding of memory, with a view to figuring out how that might fit into an overall road map for whole brain emulation."

Having worked for a while at Halcyon Molecular, a Silicon Valley gene-sequencing and nanotechnology startup funded by Peter Thiel, he decided to stay in the Bay Area and start his own nonprofit company aimed at advancing the cause to which he'd long been dedicated: Carboncopies.

Koene's decision was rooted in the very reason he began pursuing that work in the first place: an anxious awareness of the small and diminishing store of days that remained to him. If he'd gone the university route, he'd have had to devote most of his time, at least until securing tenure, to projects that were at best tangentially relevant to his central enterprise. The path he had chosen was a difficult one for a scientist and he lived and worked from one small infusion of private funding to the next.

But Silicon Valley's culture of radical techno-optimism had been its own sustaining force for him, and a source of financial backing for a project that took its place within the wildly aspirational ethic of that cultural context. There were people there or thereabouts, wealthy and influential, for whom a future in which human minds might be uploaded to computers was one to be actively sought, a problem to be solved, disruptively innovated, by the application of money.

Transcendence
Brainchild of the movies: in Transcendence (2014), scientist Will Caster, played by Johnny Depp, uploads his mind to a computer program with dangerous results.

One such person was Dmitry Itskov, a 36-year-old Russian tech multimillionaire and founder of the 2045 Initiative, an organisation whose stated aim was to create technologies "enabling the transfer of an individual's personality to a more advanced nonbiological carrier, and extending life, including to the point of immortality". One of Itskov's projects was the creation of avatars: artificial humanoid bodies that would be controlled through brain-computer interface, technologies that would be complementary with uploaded minds. He had funded Koene's work with Carboncopies and in 2013 they organised a conference in New York called Global Futures 2045, aimed, according to its promotional blurb, at the discussion of "a new evolutionary strategy for humanity".

When we spoke, Koene was working with another tech entrepreneur named Bryan Johnson, who had sold his automated payment company to PayPal a couple of years back for $800m and who now controlled a venture capital concern called the OS Fund, which, I learned from its website, "invests in entrepreneurs working towards quantum leap discoveries that promise to rewrite the operating systems of life". This language struck me as strange and unsettling in a way that revealed something crucial about the attitude toward human experience that was spreading outward from its Bay Area centre: a cluster of software metaphors that had metastasised into a way of thinking about what it meant to be a human being.

And it was the same essential metaphor that lay at the heart of Koene's project: the mind as a piece of software, an application running on the platform of flesh. When he used the term emulation, he was using it explicitly to evoke the sense in which a PC's operating system could be emulated on a Mac, as what he called "platform independent code".

The relevant science for whole brain emulation is, as you'd expect, hideously complicated, and its interpretation deeply ambiguous, but if I can risk a gross oversimplification here, I will say that it is possible to conceive of the idea as something like this: first, you scan the pertinent information in a person's brain (the neurons, the endlessly ramifying connections between them, the information-processing activity of which consciousness is seen as a byproduct) through whatever technology, or combination of technologies, becomes feasible first (nanobots, electron microscopy, etc). That scan then becomes a blueprint for the reconstruction of the subject brain's neural networks, which is then converted into a computational model. Finally, you emulate all of this on a third-party non-flesh-based substrate: some kind of supercomputer or a humanoid machine designed to reproduce and extend the experience of embodiment; something, perhaps, like Natasha Vita-More's Primo Posthuman.

The whole point of substrate independence, as Koene pointed out to me whenever I asked him what it would be like to exist outside of a human body (and I asked him many times, in various ways), was that it would be like no one thing, because there would be no one substrate, no one medium of being. This was the concept transhumanists referred to as morphological freedom: the liberty to take any bodily form technology permits.

"You can be anything you like," as an article about uploading in Extropy magazine put it in the mid-90s. "You can be big or small; you can be lighter than air and fly; you can teleport and walk through walls. You can be a lion or an antelope, a frog or a fly, a tree, a pool, the coat of paint on a ceiling."

What really interested me about this idea was not how strange and far-fetched it seemed (though it ticked those boxes resolutely enough), but rather how fundamentally identifiable it was, how universal. When talking to Koene, I was mostly trying to get to grips with the feasibility of the project and with what it was he envisioned as a desirable outcome. But then we would part company (I would hang up the call, or I would take my leave and start walking toward the nearest station) and I would find myself feeling strangely affected by the whole project, strangely moved.

Because there was something, in the end, paradoxically and definitively human in this desire for liberation from human form. I found myself thinking often of WB Yeats's Sailing to Byzantium, in which the ageing poet writes of his burning to be free of the weakening body, the sickening heart, to abandon the dying animal for the manmade and immortal form of a mechanical bird. "Once out of nature," he writes, "I shall never take/ My bodily form from any natural thing/ But such a form as Grecian goldsmiths make."

One evening, we were sitting outside a combination bar/laundromat/standup comedy venue on Folsom Street, a place with the fortuitous name of BrainWash, when I confessed that the idea of having my mind uploaded to some technological substrate was deeply unappealing to me, horrifying even. The effects of technology on my life, even now, were something about which I was profoundly ambivalent; for all I had gained in convenience and connectedness, I was increasingly aware of the extent to which my movements in the world were mediated and circumscribed by corporations whose only real interest was in reducing the lives of human beings to data, as a means to further reducing us to profit.

The content we consumed, the people with whom we had romantic encounters, the news we read about the outside world: all these movements were coming increasingly under the influence of unseen algorithms, the creations of these corporations, whose complicity with government, moreover, had come to seem like the great submerged narrative of our time. Given the world we were living in, where the fragile liberal ideal of the autonomous self was already receding like a half-remembered dream into the doubtful haze of history, wouldn't a radical fusion of ourselves with technology amount, in the end, to a final capitulation of the very idea of personhood?

Koene nodded again and took a sip of his beer.

"Hearing you say that," he said, "makes it clear that there's a major hurdle there for people. I'm more comfortable than you are with the idea, but that's because I've been exposed to it for so long that I've just got used to it."

Dmitry
Russian billionaire Dmitry Itskov wants to create technologies enabling "the transfer of an individual's personality to a more advanced nonbiological carrier". Photograph: Mary Altaffer/AP

In the weeks and months after I returned from San Francisco, I thought obsessively about the idea of whole brain emulation. One morning, I was at home in Dublin, suffering from both a head cold and a hangover. I lay there, idly considering hauling myself out of bed to join my wife and my son, who were in his bedroom next door enjoying a raucous game of Buckaroo. I realised that these conditions (head cold, hangover) had imposed upon me a regime of mild bodily estrangement. As often happens when I'm feeling under the weather, I had a sense of myself as an irreducibly biological thing, an assemblage of flesh and blood and gristle. I felt myself to be an organism with blocked nasal passages, a bacteria-ravaged throat, a sorrowful ache deep within its skull, its cephalon. I was aware of my substrate, in short, because my substrate felt like shit.

And I was gripped by a sudden curiosity as to what, precisely, that substrate consisted of, as to what I myself happened, technically speaking, to be. I reached across for the phone on my nightstand and entered into Google the words "What is the human…" The first three autocomplete suggestions offered "What is The Human Centipede about", and then: "What is the human body made of", and then: "What is the human condition".

It was the second question I wanted answered at this particular time, as perhaps a back door into the third. It turned out that I was 65% oxygen, which is to say that I was mostly air, mostly nothing. After that, I was composed of diminishing quantities of carbon and hydrogen, of calcium and sulphur and chlorine, and so on down the elemental table. I was also mildly surprised to learn that, like the iPhone I was extracting this information from, I also contained trace elements of copper and iron and silicon.
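The composition the search turned up translates readily into actual masses. The fractions below are the commonly cited approximate figures (only the 65% oxygen number comes from the text), and the 70 kg body weight is an assumption for illustration:

```python
# Approximate mass fractions of the human body by element.
# Only the oxygen figure appears in the text; the rest are commonly
# cited rough values, assumed here for illustration.
BODY_MASS_FRACTIONS = {
    "oxygen": 0.65,
    "carbon": 0.18,
    "hydrogen": 0.10,
    "nitrogen": 0.03,
    "calcium": 0.015,
    "phosphorus": 0.01,
    "other": 0.015,  # sulphur, chlorine, trace copper, iron, silicon...
}

def grams_of(element: str, body_mass_kg: float = 70.0) -> float:
    """Mass of one element in a body of the given total mass."""
    return BODY_MASS_FRACTIONS[element] * body_mass_kg * 1000
```

On these figures a 70 kg person carries about 45.5 kg of oxygen, most of it locked up in water.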

What a piece of work is a man, I thought, what a quintessence of dust.

Some minutes later, my wife entered the bedroom on her hands and knees, our son on her back, gripping the collar of her shirt tight in his little fists. She was making clip-clop noises as she crawled forward; he was laughing giddily and shouting: "Don't buck! Don't buck!"

With a loud neighing sound, she arched her back and sent him tumbling gently into a row of shoes by the wall and he screamed in delighted outrage, before climbing up again. None of this, I felt, could be rendered in code. None of this, I felt, could be run on any other substrate. Their beauty was bodily, in the most profound sense, in the saddest and most wonderful sense.

I never loved my wife and our little boy more, I realised, than when I thought of them as mammals. I dragged myself, my animal body, out of bed to join them.

To Be a Machine by Mark O'Connell is published by Granta (£12.99). To order a copy for £11.04 go to bookshop.theguardian.com or call 0330 333 6846. Free UK p&p over £10, online orders only. Phone orders min p&p of £1.99.

Read more: https://www.theguardian.com/science/2017/mar/25/animal-life-is-over-machine-life-has-begun-road-to-immortality

How The Insights Of The Large Hadron Collider Are Being Made Open To Everyone

If you visit the Large Hadron Collider (LHC) exhibition, now at the Queensland Museum, you'll see the recreation of a moment when the scientist who saw the first results indicating discovery of the Higgs boson laments she can't yet tell anyone.

It's a transitory problem for her, lasting as long as it takes for the result to be thoroughly cross-checked. But it illustrates a key concept in science: it's not enough to do it; it must be communicated.

That's what is behind one of the lesser known initiatives of CERN (the European Organization for Nuclear Research): an ambitious plan to make all its research in particle physics available to everyone, with a big global collaboration inspired by the way scientists came together to make discoveries at the LHC.

This initiative is called SCOAP3, the Sponsoring Consortium for Open Access Publishing in Particle Physics, and is now about to enter its fourth year of operation. It's a worldwide collaboration of more than 3,000 libraries (including six in Australia), key funding agencies and research centres in 44 countries, together with three intergovernmental organisations.

It aims to make work previously only available to paying subscribers of academic journals freely and immediately available to everyone. In its first three years it has made more than 13,000 articles available.

Not only are these articles free for anyone to read, but because they are published under a Creative Commons attribution licence (CC BY), they are also available for anyone to use in any way they wish, such as to illustrate a talk, pass on to a class of schoolchildren, or feed to an artificial intelligence program to extract information from. And these usage rights are enshrined forever.

Open science

The concept of sharing research is not new in physics. Open access to research is now a growing worldwide initiative, including in Australasia. CERN, which runs the LHC, was also where the world wide web was invented in 1989 by Tim Berners-Lee, then a British computer scientist at the laboratory.

The main purpose of the web was to enable researchers contributing to CERN from all over the world to share documents, including scientific drafts, no matter what computer systems they were using.

Before the web, physicists had been sharing paper drafts by post for decades, so they were one of the first groups to really embrace the new online opportunities for sharing early research. Today, the pre-press site arxiv.org has more than a million free article drafts covering physics, mathematics, astronomy and more.

But, with such a specialised field, do these open access papers really matter? The short answer is yes. Downloads from journals participating in SCOAP3 have doubled.

With millions of open access articles now being downloaded across all specialities, there is enormous opportunity for new ideas and collaborations to spring from chance readership. This is an important trend: the concept of serendipity enabled by open access was explored in 2015 in an episode of ABC RN's Future Tense program.

Greater than the sum of the parts

There's also a bigger picture to SCOAP3's open access model. Not long ago, the research literature was fragmented. Individual papers and the connections between them were only as good as the physical library, with its paper journals, that academics had access to.

Now we can do searches in much less time than we spend thinking of the search question, and the results we are presented with are crucially dependent on how easily available the findings themselves are. And availability is not just a function of whether an article is free or not but whether it is truly open, i.e. connected and reusable.

One concept is whether research is FAIR, or Findable, Accessible, Interoperable and Reusable. In short, can anyone find, read, use and reuse the work?
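The four FAIR questions lend themselves to a mechanical checklist. The sketch below is a minimal illustration only; the metadata field names (`doi`, `open_url`, `format`, `licence`) are invented for the example, not any standard schema, and real FAIR assessment is far richer than four boolean tests:

```python
# Minimal sketch of a FAIR checklist applied to a record's metadata.
# Field names are invented for illustration.
FAIR_CHECKS = {
    "findable": lambda r: bool(r.get("doi")),         # has a persistent identifier
    "accessible": lambda r: bool(r.get("open_url")),  # retrievable without a paywall
    "interoperable": lambda r: r.get("format") in {"xml", "json", "pdf"},
    "reusable": lambda r: r.get("licence", "").startswith("CC"),
}

def fair_report(record: dict) -> dict:
    """Return each FAIR criterion with a pass/fail for this record."""
    return {name: check(record) for name, check in FAIR_CHECKS.items()}

def is_fair(record: dict) -> bool:
    """A record counts as FAIR here only if it passes all four checks."""
    return all(fair_report(record).values())
```

On this toy scheme, a SCOAP3-style article with a persistent identifier, an open URL and a CC BY licence passes all four checks, while a paywalled paper with no licence fails on access and reuse, and is, in the article's terms, effectively invisible.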

The principle is most advanced for data, but in Australia work is ongoing to apply it to all research outputs. This approach was also proposed at the November 2016 G20 Science, Technology and Innovation Ministers' Meeting. Research findings that are not FAIR can, effectively, be invisible. It's a huge waste of millions of taxpayer dollars to fund research that won't be seen.

There is an even bigger picture that research and research publications have to fit into: that of science in society.

Across the world we see politicians challenging accepted scientific norms. Is the fact that most academic research remains available only to those who can pay to see it contributing to an acceptance of such misinformed views?

If one role for science is to inform public debate, then restricting access to that science will necessarily hinder any informed public debate. Although no one suggests that most readers of news sites will regularly want to delve into the details of papers in high energy physics, open access papers are 47% more likely to end up being cited in Wikipedia, which is a source that many non-scientists do turn to.

Even worse, work that is not available openly now may not even be available in perpetuity, something that is being discussed by scientists in the USA.

So in the same way that CERN itself is an example of the power of international collaboration to ask some of the fundamental scientific questions of our time, SCOAP3 provides a way to ensure that the answers, whatever they are, are available to everyone, forever.

Virginia Barbour, Executive Director, Australasian Open Access Strategy Group, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

Read more: http://www.iflscience.com/physics/how-the-insights-of-the-large-hadron-collider-are-being-made-open-to-everyone/

How the Hitchhiker's Guide can make the world a better place | Marcus O'Dair

Douglas Adams's sci-fi classic has inspired real-life tech innovations. So what else could we rip from its pages to aid our ailing society?

The Mobile World Congress, which takes place annually in Barcelona, is usually dominated by smartphones. Grabbing headlines this year, however, is the Pilot earpiece and its promise to instantly translate languages: a real-life version of the Babel Fish from The Hitchhiker's Guide to the Galaxy.

The knife that toasts became a reality in 2015: it's called the FurzoToasto.

It is not the first time that elements of science fiction from Douglas Adams's story have subsequently become science fact. The technology that allows the Hitchhiker's Guide to be operated simply by brushing with one's fingers is now familiar from smartphones and tablets. The information the Guide stores, meanwhile, is user-generated, and constantly updated; the approach adopted by Wikipedia. And the sub-etha telecommunications network? That's the internet, even if it doesn't yet extend across the entire Milky Way. Even the knife that toasts became a reality in 2015: it's called the FurzoToasto. So which of Douglas Adams's other inventions should scientists bring to life?

Crisis inducer

Though it resembles a wristwatch, this product carries out a very different function: it convinces the wearer that a crisis is imminent. The severity of the crisis can be preselected by the user, but it's always enough to get the adrenaline pumping. The ultimate cure for lethargy.

Joo Janta 200 Super-Chromatic Peril Sensitive Sunglasses

If the crisis is, on the other hand, all too real, these sunglasses offer a solution: at the first sign of danger, they turn opaque. OK, a relaxed attitude to danger might represent only a short-term solution but, for those few moments, ignorance is bliss. Could be useful in 2017.

Infinite Improbability Drive

The Infinite Improbability Drive, the key feature of the Heart of Gold spaceship, can carry out any conceivable action, providing that someone on board knows precisely how improbable that action is. It can, for instance, transform a pair of missiles into a sperm whale and a bowl of petunias, as well as facilitating interstellar travel. Just what we need in the Ministry of Defence.

Total Perspective Vortex

Though powered by a piece of fairy cake, this machine is far from innocuous: in fact, in the Hitchhiker's world, exposure to the Total Perspective Vortex is the ultimate form of torture, worse even than Vogon poetry. It does this by revealing to users their cosmic insignificance. Might be useful for reining in the egos of certain politicians.

Nutri-matic drinks dispenser

This vending machine won't issue a drink until it has analysed the user's taste buds, metabolism and brain. Collecting all this data is pointless, however, as the machine always ultimately dispenses the same thing: a shoddy cup of tea. A properly bespoke drinks dispenser, however, sounds appealing and, in the era of big data and artificial intelligence, it might not be too far off. Mine's a Pan Galactic Gargle Blaster.

Bistromathic Drive

Part of the appeal of Adams's story lies in its combination of sci-fi and the mundane: for all the planet-hopping, The Hitchhiker's Guide also fits neatly into a line of English comedy running from Fawlty Towers to Peep Show. The Bistromathic Drive harnesses the unfathomable mathematics of restaurants in order to power a spaceship of extraordinary powers. Next time you're trying to split a bill between a large number of diners, few of whom are paying in cash, imagine you could use those very same mathematical quirks to travel across interstellar distances.

The Restaurant at the End of the Universe

You might be getting a sense, by now, that Douglas Adams liked restaurants but he never visited one 576 thousand million years in the future. His protagonists, however, enjoy the benefits of time travel, and so are able to visit Milliways, billed as the Restaurant at the End of the Universe. At Milliways, diners watch the whole of creation destroyed, night after night: apocalypse as background entertainment. There's no need to book (you can reserve a table retrospectively, when you return to your own time) and the meal is free too: just deposit a single penny in your own era, and the compound interest will take care of even the most exorbitant bill. An instant solution to the cost-of-living crisis.

Point of view gun

As Stephen Fry, playing the Guide, tells us in the film version of The Hitchhiker's Guide to the Galaxy, the point of view gun does precisely what its name suggests: if you point it at someone and pull the trigger, he or she will instantly see things from your point of view. Instant empathy. Something the past 12 months have been sorely lacking.

Read more: https://www.theguardian.com/commentisfree/2017/mar/06/hitchhikers-guide-to-the-galaxy-technology-sci-fi-books

How statistics lost their power and why we should fear what comes next | William Davies

The Long Read: The ability of statistics to accurately represent the world is declining. In its wake, a new age of big data controlled by private companies is taking over and putting democracy in peril

In theory, statistics should help settle arguments. They ought to provide stable reference points that everyone, no matter what their politics, can agree on. Yet in recent years, divergent levels of trust in statistics have become one of the key schisms that have opened up in western liberal democracies. Shortly before the November presidential election, a study in the US discovered that 68% of Trump supporters distrusted the economic data published by the federal government. In the UK, a research project by Cambridge University and YouGov looking at conspiracy theories discovered that 55% of the population believes that the government is hiding the truth about the number of immigrants living here.

Rather than defusing controversy and polarisation, it seems as if statistics are actually stoking them. Antipathy to statistics has become one of the hallmarks of the populist right, with statisticians and economists chief among the various experts that were ostensibly rejected by voters in 2016. Not only are statistics viewed by many as untrustworthy, there appears to be something almost insulting or arrogant about them. Reducing social and economic issues to numerical aggregates and averages seems to violate some people's sense of political decency.

Nowhere is this more vividly manifest than with immigration. The thinktank British Future has studied how best to win arguments in favour of immigration and multiculturalism. One of its main findings is that people often respond warmly to qualitative evidence, such as the stories of individual migrants and photographs of diverse communities. But statistics, especially regarding the alleged benefits of migration to Britain's economy, elicit quite the opposite reaction. People assume that the numbers are manipulated and dislike the elitism of resorting to quantitative evidence. Presented with official estimates of how many immigrants are in the country illegally, a common response is to scoff. Far from increasing support for immigration, British Future found, pointing to its positive effect on GDP can actually make people more hostile to it. GDP itself has come to seem like a Trojan horse for an elitist liberal agenda. Sensing this, politicians have now largely abandoned discussing immigration in economic terms.

All of this presents a serious challenge for liberal democracy. Put bluntly, the British government (its officials, experts, advisers and many of its politicians) does believe that immigration is on balance good for the economy. The British government did believe that Brexit was the wrong choice. The problem is that the government is now engaged in self-censorship, for fear of provoking people further.

This is an unwelcome dilemma. Either the state continues to make claims that it believes to be valid and is accused by sceptics of propaganda, or else, politicians and officials are confined to saying what feels plausible and intuitively true, but may ultimately be inaccurate. Either way, politics becomes mired in accusations of lies and cover-ups.

The declining authority of statistics and the experts who analyse them is at the heart of the crisis that has become known as post-truth politics. And in this uncertain new world, attitudes towards quantitative expertise have become increasingly divided. From one perspective, grounding politics in statistics is elitist, undemocratic and oblivious to people's emotional investments in their community and nation. It is just one more way that privileged people in London, Washington DC or Brussels seek to impose their worldview on everybody else. From the opposite perspective, statistics are quite the opposite of elitist. They enable journalists, citizens and politicians to discuss society as a whole, not on the basis of anecdote, sentiment or prejudice, but in ways that can be validated. The alternative to quantitative expertise is less likely to be democracy than an unleashing of tabloid editors and demagogues to provide their own truth of what is going on across society.

Is there a way out of this polarisation? Must we simply choose between a politics of facts and one of emotions, or is there another way of looking at this situation? One way is to view statistics through the lens of their history. We need to try to see them for what they are: neither unquestionable truths nor elite conspiracies, but rather as tools designed to simplify the job of government, for better or worse. Viewed historically, we can see what a crucial role statistics have played in our understanding of nation states and their progress. This raises the alarming question of how, if at all, we will continue to have common ideas of society and collective progress, should statistics fall by the wayside.


In the second half of the 17th century, in the aftermath of prolonged and bloody conflicts, European rulers adopted an entirely new perspective on the task of government, focused upon demographic trends an approach made possible by the birth of modern statistics. Since ancient times, censuses had been used to track population size, but these were costly and laborious to carry out and focused on citizens who were considered politically important (property-owning men), rather than society as a whole. Statistics offered something quite different, transforming the nature of politics in the process.

Statistics were designed to give an understanding of a population in its entirety, rather than simply to pinpoint strategically valuable sources of power and wealth. In the early days, this didn't always involve producing numbers. In Germany, for example (from where we get the term Statistik), the challenge was to map disparate customs, institutions and laws across an empire of hundreds of micro-states. What characterised this knowledge as statistical was its holistic nature: it aimed to produce a picture of the nation as a whole. Statistics would do for populations what cartography did for territory.

Equally significant was the inspiration of the natural sciences. Thanks to standardised measures and mathematical techniques, statistical knowledge could be presented as objective, in much the same way as astronomy. Pioneering English demographers such as William Petty and John Graunt adapted mathematical techniques to estimate population changes, for which they were hired by Oliver Cromwell and Charles II.

The emergence in the late 17th century of government advisers claiming scientific authority, rather than political or military acumen, represents the origins of the expert culture now so reviled by populists. These path-breaking individuals were neither pure scholars nor government officials, but hovered somewhere between the two. They were enthusiastic amateurs who offered a new way of thinking about populations that privileged aggregates and objective facts. Thanks to their mathematical prowess, they believed they could calculate what would otherwise require a vast census to discover.

There was initially only one client for this type of expertise, and the clue is in the word statistics. Only centralised nation states had the capacity to collect data across large populations in a standardised fashion and only states had any need for such data in the first place. Over the second half of the 18th century, European states began to collect more statistics of the sort that would appear familiar to us today. Casting an eye over national populations, states became focused upon a range of quantities: births, deaths, baptisms, marriages, harvests, imports, exports, price fluctuations. Things that would previously have been registered locally and variously at parish level became aggregated at a national level.

New techniques were developed to represent these indicators, which exploited both the vertical and horizontal dimensions of the page, laying out data in matrices and tables, just as merchants had done with the development of standardised book-keeping techniques in the late 15th century. Organising numbers into rows and columns offered a powerful new way of displaying the attributes of a given society. Large, complex issues could now be surveyed simply by scanning the data laid out geometrically across a single page.

These innovations carried extraordinary potential for governments. By simplifying diverse populations down to specific indicators, and displaying them in suitable tables, governments could circumvent the need to acquire broader detailed local and historical insight. Of course, viewed from a different perspective, this blindness to local cultural variability is precisely what makes statistics vulgar and potentially offensive. Regardless of whether a given nation had any common cultural identity, statisticians would assume some standard uniformity or, some might argue, impose that uniformity upon it.

Not every aspect of a given population can be captured by statistics. There is always an implicit choice in what is included and what is excluded, and this choice can become a political issue in its own right. The fact that GDP only captures the value of paid work, thereby excluding the work traditionally done by women in the domestic sphere, has made it a target of feminist critique since the 1960s. In France, it has been illegal to collect census data on ethnicity since 1978, on the basis that such data could be used for racist political purposes. (This has the side-effect of making systemic racism in the labour market much harder to quantify.)

Despite these criticisms, the aspiration to depict a society in its entirety, and to do so in an objective fashion, has meant that various progressive ideals have been attached to statistics. The image of statistics as a dispassionate science of society is only one part of the story. The other part is about how powerful political ideals became invested in these techniques: ideals of evidence-based policy, rationality, progress and nationhood grounded in facts, rather than in romanticised stories.


Since the high-point of the Enlightenment in the late 18th century, liberals and republicans have invested great hope that national measurement frameworks could produce a more rational politics, organised around demonstrable improvements in social and economic life. The great theorist of nationalism, Benedict Anderson, famously described nations as imagined communities, but statistics offer the promise of anchoring this imagination in something tangible. Equally, they promise to reveal what historical path the nation is on: what kind of progress is occurring? How rapidly? For Enlightenment liberals, who saw nations as moving in a single historical direction, this question was crucial.

The potential of statistics to reveal the state of the nation was seized in post-revolutionary France. The Jacobin state set about imposing a whole new framework of national measurement and national data collection. The world's first official bureau of statistics was opened in Paris in 1800. Uniformity of data collection, overseen by a centralised cadre of highly educated experts, was an integral part of the ideal of a centrally governed republic, which sought to establish a unified, egalitarian society.

From the Enlightenment onwards, statistics played an increasingly important role in the public sphere, informing debate in the media, providing social movements with evidence they could use. Over time, the production and analysis of such data became less dominated by the state. Academic social scientists began to analyse data for their own purposes, often entirely unconnected to government policy goals. By the late 19th century, reformers such as Charles Booth in London and WEB Du Bois in Philadelphia were conducting their own surveys to understand urban poverty.

To recognise how statistics have been entangled in notions of national progress, consider the case of GDP. GDP is an estimate of the sum total of a nation's consumer spending, government spending, investments and trade balance (exports minus imports), which is represented in a single number. This is fiendishly difficult to get right, and efforts to calculate this figure began, like so many mathematical techniques, as a matter of marginal, somewhat nerdish interest during the 1930s. It was only elevated to a matter of national political urgency by the second world war, when governments needed to know whether the national population was producing enough to keep up the war effort. In the decades that followed, this single indicator, though never without its critics, took on a hallowed political status, as the ultimate barometer of a government's competence. Whether GDP is rising or falling is now virtually a proxy for whether society is moving forwards or backwards.
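The arithmetic behind that single number is simple, even if measuring its components is not. A minimal sketch of the expenditure approach described above, with entirely invented figures (no real national accounts data):

```python
def gdp(consumption, government, investment, exports, imports):
    """Expenditure approach: GDP = C + G + I + (X - M), in the units of the inputs."""
    return consumption + government + investment + (exports - imports)

# Hypothetical economy, figures in billions. A trade deficit (imports > exports)
# subtracts from the headline number.
print(gdp(consumption=1500, government=700, investment=400, exports=600, imports=650))
# prints 2550
```

The difficulty lies not in the sum but in estimating each term reliably across an entire economy, which is why the headline figure conceals so much.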

Or take the example of opinion polling, an early instance of statistical innovation occurring in the private sector. During the 1920s, statisticians developed methods for identifying a representative sample of survey respondents, so as to glean the attitudes of the public as a whole. This breakthrough, which was first seized upon by market researchers, soon led to the birth of opinion polling. This new industry immediately became the object of public and political fascination, as the media reported on what this new science told us about what women or Americans or manual labourers thought about the world.

Nowadays, the flaws of polling are endlessly picked apart. But this is partly due to the tremendous hopes that have been invested in polling since its origins. It is only to the extent that we believe in mass democracy that we are so fascinated or concerned by what the public thinks. But for the most part it is thanks to statistics, and not to democratic institutions as such, that we can know what the public thinks about specific issues. We underestimate how much of our sense of the public interest is rooted in expert calculation, as opposed to democratic institutions.

As indicators of health, prosperity, equality, opinion and quality of life have come to tell us who we are collectively and whether things are getting better or worse, politicians have leaned heavily on statistics to buttress their authority. Often, they lean too heavily, stretching evidence too far, interpreting data too loosely, to serve their cause. But that is an inevitable hazard of the prevalence of numbers in public life, and need not necessarily trigger the type of wholehearted rejections of expertise that we have witnessed recently.

In many ways, the contemporary populist attack on experts is born out of the same resentment as the attack on elected representatives. In talking of society as a whole, in seeking to govern the economy as a whole, both politicians and technocrats are believed to have lost touch with how it feels to be a single citizen in particular. Both statisticians and politicians have fallen into the trap of seeing like a state, to use a phrase from the anarchist political thinker James C Scott. Speaking scientifically about the nation for instance in terms of macroeconomics is an insult to those who would prefer to rely on memory and narrative for their sense of nationhood, and are sick of being told that their imagined community does not exist.

On the other hand, statistics (together with elected representatives) performed an adequate job of supporting a credible public discourse for decades if not centuries. What changed?


The crisis of statistics is not quite as sudden as it might seem. For roughly 350 years, the great achievement of statisticians has been to reduce the complexity and fluidity of national populations into manageable, comprehensible facts and figures. Yet in recent decades, the world has changed dramatically, thanks to the cultural politics that emerged in the 1960s and the reshaping of the global economy that began soon after. It is not clear that the statisticians have always kept pace with these changes. Traditional forms of statistical classification and definition are coming under strain from more fluid identities, attitudes and economic pathways. Efforts to represent demographic, social and economic changes in terms of simple, well-recognised indicators are losing legitimacy.

Consider the changing political and economic geography of nation states over the past 40 years. The statistics that dominate political debate are largely national in character: poverty levels, unemployment, GDP, net migration. But the geography of capitalism has been pulling in somewhat different directions. Plainly globalisation has not rendered geography irrelevant. In many cases it has made the location of economic activity far more important, exacerbating the inequality between successful locations (such as London or San Francisco) and less successful locations (such as north-east England or the US rust belt). The key geographic units involved are no longer nation states. Rather, it is cities, regions or individual urban neighbourhoods that are rising and falling.

The Enlightenment ideal of the nation as a single community, bound together by a common measurement framework, is harder and harder to sustain. If you live in one of the towns in the Welsh valleys that was once dependent on steel manufacturing or mining for jobs, politicians talking of how the economy is doing well are likely to breed additional resentment. From that standpoint, the term GDP fails to capture anything meaningful or credible.

When macroeconomics is used to make a political argument, this implies that the losses in one part of the country are offset by gains somewhere else. Headline-grabbing national indicators, such as GDP and inflation, conceal all sorts of localised gains and losses that are less commonly discussed by national politicians. Immigration may be good for the economy overall, but this does not mean that there are no local costs at all. So when politicians use national indicators to make their case, they implicitly assume some spirit of patriotic mutual sacrifice on the part of voters: you might be the loser on this occasion, but next time you might be the beneficiary. But what if the tables are never turned? What if the same city or region wins over and over again, while others always lose? On what principle of give and take is that justified?

In Europe, the currency union has exacerbated this problem. The indicators that matter to the European Central Bank (ECB), for example, are those representing half a billion people. The ECB is concerned with the inflation or unemployment rate across the eurozone as if it were a single homogeneous territory, at the same time as the economic fate of European citizens is splintering in different directions, depending on which region, city or neighbourhood they happen to live in. Official knowledge becomes ever more abstracted from lived experience, until that knowledge simply ceases to be relevant or credible.

The privileging of the nation as the natural scale of analysis is one of the inbuilt biases of statistics that years of economic change have eaten away at. Another inbuilt bias that is coming under increasing strain is classification. Part of the job of statisticians is to classify people by putting them into a range of boxes that the statistician has created: employed or unemployed, married or unmarried, pro-Europe or anti-Europe. So long as people can be placed into categories in this way, it becomes possible to discern how far a given classification extends across the population.

This can involve somewhat reductive choices. To count as unemployed, for example, a person has to report to a survey that they are involuntarily out of work, even if it may be more complicated than that in reality. Many people move in and out of work all the time, for reasons that might have as much to do with health and family needs as labour market conditions. But thanks to this simplification, it becomes possible to identify the rate of unemployment across the population as a whole.

Here's a problem, though. What if many of the defining questions of our age are not answerable in terms of the extent of people encompassed, but the intensity with which people are affected? Unemployment is one example. The fact that Britain got through the Great Recession of 2008-13 without unemployment rising substantially is generally viewed as a positive achievement. But the focus on unemployment masked the rise of underemployment, that is, people not getting a sufficient amount of work or being employed at a level below that which they are qualified for. This currently accounts for around 6% of the employed labour force. Then there is the rise of the self-employed workforce, where the divide between employed and involuntarily unemployed makes little sense.

This is not a criticism of bodies such as the Office for National Statistics (ONS), which does now produce data on underemployment. But so long as politicians continue to deflect criticism by pointing to the unemployment rate, the experiences of those struggling to get enough work or to live on their wages go unrepresented in public debate. It wouldn't be all that surprising if these same people became suspicious of policy experts and the use of statistics in political debate, given the mismatch between what politicians say about the labour market and the lived reality.

The rise of identity politics since the 1960s has put additional strain on such systems of classification. Statistical data is only credible if people will accept the limited range of demographic categories that are on offer, which are selected by the expert not the respondent. But where identity becomes a political issue, people demand to define themselves on their own terms, where gender, sexuality, race or class is concerned.

Opinion polling may be suffering for similar reasons. Polls have traditionally captured peoples attitudes and preferences, on the reasonable assumption that people will behave accordingly. But in an age of declining political participation, it is not enough simply to know which box someone would prefer to put an X in. One also needs to know whether they feel strongly enough about doing so to bother. And when it comes to capturing such fluctuations in emotional intensity, polling is a clumsy tool.

Statistics have faced criticism regularly over their long history. The challenges that identity politics and globalisation present to them are not new either. Why then do the events of the past year feel quite so damaging to the ideal of quantitative expertise and its role in political debate?


In recent years, a new way of quantifying and visualising populations has emerged that potentially pushes statistics to the margins, ushering in a different era altogether. Statistics, collected and compiled by technical experts, are giving way to data that accumulates by default, as a consequence of sweeping digitisation. Traditionally, statisticians have known which questions they wanted to ask regarding which population, then set out to answer them. By contrast, data is automatically produced whenever we swipe a loyalty card, comment on Facebook or search for something on Google. As our cities, cars, homes and household objects become digitally connected, the amount of data we leave in our trail will grow even greater. In this new world, data is captured first and research questions come later.

In the long term, the implications of this will probably be as profound as the invention of statistics was in the late 17th century. The rise of big data provides far greater opportunities for quantitative analysis than any amount of polling or statistical modelling. But it is not just the quantity of data that is different. It represents an entirely different type of knowledge, accompanied by a new mode of expertise.

First, there is no fixed scale of analysis (such as the nation) nor any settled categories (such as unemployed). These vast new data sets can be mined in search of patterns, trends, correlations and emergent moods. It becomes a way of tracking the identities that people bestow upon themselves (such as #ImwithCorbyn or entrepreneur) rather than imposing classifications upon them. This is a form of aggregation suitable to a more fluid political age, in which not everything can be reliably referred back to some Enlightenment ideal of the nation state as guardian of the public interest.

Second, the majority of us are entirely oblivious to what all this data says about us, either individually or collectively. There is no equivalent of an Office for National Statistics for commercially collected big data. We live in an age in which our feelings, identities and affiliations can be tracked and analysed with unprecedented speed and sensitivity but there is nothing that anchors this new capacity in the public interest or public debate. There are data analysts who work for Google and Facebook, but they are not experts of the sort who generate statistics and who are now so widely condemned. The anonymity and secrecy of the new analysts potentially makes them far more politically powerful than any social scientist.

A company such as Facebook has the capacity to carry out quantitative social science on hundreds of millions of people, at very low cost. But it has very little incentive to reveal the results. In 2014, when Facebook researchers published results of a study of emotional contagion that they had carried out on their users, in which they altered news feeds to see how it affected the content that users then shared in response, there was an outcry that people were being unwittingly experimented on. So, from Facebook's point of view, why go to all the hassle of publishing? Why not just do the study and keep quiet?


What is most politically significant about this shift from a logic of statistics to one of data is how comfortably it sits with the rise of populism. Populist leaders can heap scorn upon traditional experts, such as economists and pollsters, while trusting in a different form of numerical analysis altogether. Such politicians rely on a new, less visible elite, who seek out patterns from vast data banks, but rarely make any public pronouncements, let alone publish any evidence. These data analysts are often physicists or mathematicians, whose skills are not developed for the study of society at all. This, for example, is the worldview propagated by Dominic Cummings, former adviser to Michael Gove and campaign director of Vote Leave. Physics, mathematics and computer science are domains in which there are real experts, unlike macro-economic forecasting, Cummings has argued.

Figures close to Donald Trump, such as his chief strategist Steve Bannon and the Silicon Valley billionaire Peter Thiel, are closely acquainted with cutting-edge data analytics techniques, via companies such as Cambridge Analytica, on whose board Bannon sits. During the presidential election campaign, Cambridge Analytica drew on various data sources to develop psychological profiles of millions of Americans, which it then used to help Trump target voters with tailored messaging.

This ability to develop and refine psychological insights across large populations is one of the most innovative and controversial features of the new data analysis. As techniques of sentiment analysis, which detect the mood of large numbers of people by tracking indicators such as word usage on social media, become incorporated into political campaigns, the emotional allure of figures such as Trump will become amenable to scientific scrutiny. In a world where the political feelings of the general public are becoming this traceable, who needs pollsters?
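At its simplest, word-usage tracking of this sort amounts to counting loaded vocabulary across large volumes of text. A toy sketch of lexicon-based sentiment scoring, with an invented word list and invented posts (real systems are vastly more sophisticated, and this is an illustration of the principle only):

```python
# Tiny hand-built lexicon; real sentiment lexicons contain thousands of entries.
POSITIVE = {"great", "hope", "win", "proud"}
NEGATIVE = {"fear", "lies", "crisis", "angry"}

def sentiment_score(text):
    """Count positive minus negative words; a score above zero reads as a positive mood."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# Invented example posts, scored in bulk as a campaign tool might do at scale.
posts = ["A great day, proud of this win", "Nothing but fear and lies"]
print([sentiment_score(p) for p in posts])
# prints [3, -2]
```

Aggregated over millions of posts, even a crude score like this yields a running mood indicator, which is precisely the kind of measure that bypasses the pollster.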

Few social findings arising from this kind of data analytics ever end up in the public domain. This means that it does very little to help anchor political narrative in any shared reality. With the authority of statistics waning, and nothing stepping into the public sphere to replace it, people can live in whatever imagined community they feel most aligned to and willing to believe in. Where statistics can be used to correct faulty claims about the economy or society or population, in an age of data analytics there are few mechanisms to prevent people from giving way to their instinctive reactions or emotional prejudices. On the contrary, companies such as Cambridge Analytica treat those feelings as things to be tracked.

But even if there were an Office for Data Analytics, acting on behalf of the public and government as the ONS does, it is not clear that it would offer the kind of neutral perspective that liberals today are struggling to defend. The new apparatus of number-crunching is well suited to detecting trends, sensing the mood and spotting things as they bubble up. It serves campaign managers and marketers very well. It is less well suited to making the kinds of unambiguous, objective, potentially consensus-forming claims about society that statisticians and economists are paid for.

In this new technical and political climate, it will fall to the new digital elite to identify the facts, projections and truth amid the rushing stream of data that results. Whether indicators such as GDP and unemployment continue to carry political clout remains to be seen, but if they don't, it won't necessarily herald the end of experts, still less the end of truth. The question to be taken more seriously, now that numbers are being constantly generated behind our backs and beyond our knowledge, is where the crisis of statistics leaves representative democracy.

On the one hand, it is worth recognising the capacity of long-standing political institutions to fight back. Just as sharing economy platforms such as Uber and Airbnb have recently been thwarted by legal rulings (Uber being compelled to recognise drivers as employees, Airbnb being banned altogether by some municipal authorities), privacy and human rights law represents a potential obstacle to the extension of data analytics. What is less clear is how the benefits of digital analytics might ever be offered to the public, in the way that many statistical data sets are. Bodies such as the Open Data Institute, co-founded by Tim Berners-Lee, campaign to make data publicly available, but have little leverage over the corporations where so much of our data now accumulates. Statistics began life as a tool through which the state could view society, but gradually developed into something that academics, civic reformers and businesses had a stake in. But for many data analytics firms, secrecy surrounding methods and sources of data is a competitive advantage that they will not give up voluntarily.

A post-statistical society is a potentially frightening proposition, not because it would lack any forms of truth or expertise altogether, but because it would drastically privatise them. Statistics are one of many pillars of liberalism, indeed of the Enlightenment. The experts who produce and use them have been painted as arrogant and oblivious to the emotional and local dimensions of politics. No doubt there are ways in which data collection could be adapted to reflect lived experiences better. But the battle that will need to be waged in the long term is not between an elite-led politics of facts versus a populist politics of feeling. It is between those still committed to public knowledge and public argument and those who profit from the ongoing disintegration of those things.


Read more: https://www.theguardian.com/politics/2017/jan/19/crisis-of-statistics-big-data-democracy


How The Insights Of The Large Hadron Collider Are Being Made Open To Everyone

If you visit the Large Hadron Collider (LHC) exhibition, now at the Queensland Museum, you'll see the recreation of a moment when the scientist who saw the first results indicating discovery of the Higgs boson laments she can't yet tell anyone.

It's a transitory problem for her, lasting as long as it takes for the result to be thoroughly cross-checked. But it illustrates a key concept in science: it's not enough to do it; it must be communicated.

That's what is behind one of the lesser-known initiatives of CERN (European Organization for Nuclear Research): an ambitious plan to make all its research in particle physics available to everyone, with a big global collaboration inspired by the way scientists came together to make discoveries at the LHC.

This initiative is called SCOAP3, the Sponsoring Consortium for Open Access Publishing in Particle Physics, and is now about to enter its fourth year of operation. It's a worldwide collaboration of more than 3,000 libraries (including six in Australia), key funding agencies and research centres in 44 countries, together with three intergovernmental organisations.

It aims to make work previously only available to paying subscribers of academic journals freely and immediately available to everyone. In its first three years it has made more than 13,000 articles available.

Not only are these articles free for anyone to read, but because they are published under a Creative Commons attribution licence (CC-BY), they are also available for anyone to use in any way they wish, such as to illustrate a talk, pass on to a class of school children, or feed to an artificial intelligence program to extract information from. And these usage rights are enshrined forever.

Open science

The concept of sharing research is not new in physics. Open access to research is now a growing worldwide initiative, including in Australasia. CERN, which runs the LHC, was also where the world wide web was invented in 1989 by Tim Berners-Lee, a British computer scientist.

The main purpose of the web was to enable researchers contributing to CERN from all over the world to share documents, including scientific drafts, no matter what computer systems they were using.

Before the web, physicists had been sharing paper drafts by post for decades, so they were one of the first groups to really embrace the new online opportunities for sharing early research. Today, the preprint site arxiv.org hosts more than a million free article drafts covering physics, mathematics, astronomy and more.

But, with such a specialised field, do these open access papers really matter? The short answer is yes: downloads from journals participating in SCOAP3 have doubled.

With millions of open access articles now being downloaded across all specialities, there is enormous opportunity for new ideas and collaborations to spring from chance readership. This is an important trend: the concept of serendipity enabled by open access was explored in 2015 in an episode of ABC RN's Future Tense program.

Greater than the sum of the parts

There's also a bigger picture to SCOAP3's open access model. Not long ago, the research literature was fragmented. Individual papers and the connections between them were only as good as the physical library, with its paper journals, that academics had access to.

Now we can do searches in much less time than we spend thinking of the search question, and the results we are presented with are crucially dependent on how easily available the findings themselves are. And availability is not just a function of whether an article is free or not but whether it is truly open, i.e. connected and reusable.

One concept is whether research is FAIR, or Findable, Accessible, Interoperable and Reusable. In short, can anyone find, read, use and reuse the work?

The principle is most advanced for data, but in Australia work is ongoing to apply it to all research outputs. This approach was also proposed at the November 2016 G20 Science, Technology and Innovation Ministers' Meeting. Research findings that are not FAIR can, effectively, be invisible. It's a huge waste of millions of taxpayer dollars to fund research that won't be seen.

There is an even bigger picture that research and research publications have to fit into: that of science in society.

Across the world we see politicians challenging accepted scientific norms. Is the fact that most academic research remains available only to those who can pay to see it contributing to an acceptance of such misinformed views?

If one role for science is to inform public debate, then restricting access to that science will necessarily hinder any informed public debate. Although no one suggests that most readers of news sites will regularly want to delve into the details of papers in high energy physics, open access papers are 47% more likely to end up being cited in Wikipedia, which is a source that many non-scientists do turn to.

Even worse, work that is not available openly now may not even be available in perpetuity, something that is being discussed by scientists in the USA.

So in the same way that CERN itself is an example of the power of international collaboration to ask some of the fundamental scientific questions of our time, SCOAP3 provides a way to ensure that the answers, whatever they are, are available to everyone, forever.

Virginia Barbour, Executive Director, Australasian Open Access Strategy Group, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

Read more: http://www.iflscience.com/physics/how-the-insights-of-the-large-hadron-collider-are-being-made-open-to-everyone/


2017's big ideas part one: from driverless cars to interstellar travel

James Dyson is excited about the SafetyNet invention, Jim Al-Khalili can't wait to study Saturn up close and Amanda Levete looks to a resurgence of civic space

Transport

Mass production of driverless cars
By Jimmy Wales

The human brain is an amazing machine. It can make an almost unimaginable number of calculations a second. This outstanding ability is put to work during one of the most neurologically challenging actions people engage in on a daily basis: driving.

Several areas of the brain act in collaboration in order to receive, process, prioritise and implement real-time data perceived during driving. These complex processes may pass unnoticed by the driver, but their uninterrupted functioning is crucial.

The difference between life and death might be determined by a delay of only 100 milliseconds in response time. At high speeds, this tiny window can translate into several feet of travel, which may in turn be the difference between avoiding danger and a fatal crash. Such a minor delay may be caused by any minimal distraction: a sudden noise, a quick glance at the phone or a random thought.
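The "several feet" figure is easy to check. A quick back-of-the-envelope calculation, assuming motorway speed of 70 mph, bears it out:

```python
# Distance a car travels during a 100 ms lapse in reaction time
# at an assumed motorway speed of 70 mph.
speed_mph = 70
speed_m_per_s = speed_mph * 1609.344 / 3600   # miles/hour -> metres/second
delay_s = 0.100                               # 100 millisecond delay
distance_m = speed_m_per_s * delay_s
distance_ft = distance_m / 0.3048             # metres -> feet
print(f"{distance_m:.1f} m ({distance_ft:.1f} ft)")  # about 3.1 m, roughly 10 ft
```

So a tenth of a second of inattention costs around ten feet of road, which is easily the gap between a near miss and a collision.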

So what I am most excited about for 2017 is the groundbreaking invention that has the ability to minimise these dangers and potentially save millions of lives on the road: driverless cars.

We are getting closer than we thought, faster than we imagined, to having mass production of safe and reliable driverless cars. Many people have heard about this innovation, but not many realise how fast it is coming and how dramatically it is going to change society.

It is estimated that in 2016 automobile accidents worldwide claimed the lives of more than 1.1 million people, while more than 31 million people were injured. Once this technology is commonplace and driverless cars are ubiquitous, those numbers will shrink to a tiny fraction of what they are today.

The social impact will be even greater, to an extent that is very hard to fully imagine right now. Driverless cars will make car-sharing so much easier and more efficient that we could make do with 80% fewer cars. That would translate into less environmental pollution by decreased fuel consumption, less traffic congestion, fewer hours wasted on the roads and less need for car parks. Roads could be laid out very differently, making traffic more efficient and safer for passengers and pedestrians.

Modern technology excels in saving us precious time and making our daily lives easier. The next technological innovation will also make our roads much safer.

Jimmy Wales is an American internet entrepreneur and the co-founder of Wikipedia and Wikia.

Diet

Food goes back to basics
By Thomasina Miers

The past few years have been all about fad diets, cutting out food groups, and buying expensive ingredients to chase superfoods and super health. None of this is realistic. And after a year in which our foundations have been rocked, I feel that dieting adds an unhealthy uncertainty to our lives that we really don't need.

Food should not be about denial, guilt or killing ourselves. It is about nurturing, comfort and spending time with people who are important to us. It is about comradeship and community and breaking down barriers. We need that more than ever.

Next year will be about simplifying and going back to basics in the kitchen. The healthiest way to eat is to go as close to the source as possible. Lots of vegetables, which are cheap; lots of grains and beans. Meat only occasionally, and when it has been well looked after. My point isn't that we spend hours or a fortune in the kitchen, just that we adopt an old-fashioned approach where we avoid processed food. I have three children and zero spare time, but we eat well. Dinner is often just kale sautéed in garlic and olive oil on toast with a fried egg on top.

I think we'll see this in restaurants, too. When was the last time you heard anyone raving about a 20-course tasting menu? It feels as though that is from the last decade. Now it's all short menus and home cooking and milk from cows who might actually have eaten some grass in their lives. There is a comfort in that, and I think it plays into deeper insecurities many of us are experiencing.

Thomasina Miers is a cook, food writer and broadcaster, and the founder of the Wahaca chain of Mexican restaurants.

Astronomy

The Cassini mission's grand finale at Saturn
By Jim Al-Khalili

An image of Saturn from the Cassini mission. Photograph: Nasa/AP

When it comes to physics and astronomy, there have been a number of important stories in recent years that captured the public's imagination. Look no further than the discovery of the Higgs boson at the Large Hadron Collider in 2012 or the first detection of gravitational waves in 2016: ripples in the fabric of space itself due to the collision of two black holes more than a billion light years away. Cool stuff. And who knows what might be just around the corner? While I cannot predict what discoveries will be made in 2017, I can say with some confidence that there is one science story guaranteed to make waves around the world.

Of all the planets in the solar system, Saturn, with its beautiful rings, is without doubt the most enigmatic and mysterious, and in recent years we've had the privilege of being able to study it up close and personal thanks to the pictures beamed back to us by the Cassini spacecraft.

The Cassini mission to the giant planet has provided us with jaw-droppingly stunning colour images of Saturn's surface, its rings and its many moons. And we've also made some astonishing discoveries. For example, it has revealed jets of water vapour and organic material shooting out of the south pole of Enceladus, creating tremendous excitement that this tiny moon might even be able to support microbial life beneath its icy surface.

But rest assured, the best is yet to come. In 2017, Cassini will come to the end of its mission, 20 years after it was launched in 1997. Nasa is calling this the Grand Finale, and it's going to be pretty spectacular. In tighter and tighter orbits, over several weeks, the spacecraft is going to squeeze inside the innermost ring, skimming the surface of the planet ever more closely before finally disappearing beneath the clouds and plummeting to its death.

For the Nasa scientists, it is going to be a huge challenge to collect as much data as possible during those final days, and there is no guarantee that Cassini's instruments will work in the increasingly hostile conditions. They are hoping it will continue to beam back what it sees for as long as it can before being ultimately crushed by the incredible density and pressure within the gas giant. Cue tingles down spines, lumps in throats and tears in eyes all round.

Jim Al-Khalili is a broadcaster and a professor of physics and public engagement in science at the University of Surrey.

Environment

A solution for overfishing
By Sir James Dyson

2017 promises to be an exciting year for SafetyNet, a fishing net with a series of escape rings that help prevent young and endangered fish being caught. The invention, which is engineered by Dan Watson, won the James Dyson award in 2012 because it helps to address the very real problem of overfishing.

SafetyNet exploits the escape behaviour of fish. Small and medium-sized fish swim upwards when stressed, whereas larger fish tend to swim downwards. SafetyNet has illuminated escape rings on its top side, which act like an emergency exit sign for the smaller fish. Water flowing through the wide-open meshes guides them to freedom, while the larger ones are retained in the net.

Since winning the award, SafetyNet has been getting ever closer to making a global impact. Trials show that the number of undesired fish caught is reduced by more than half when SafetyNet is used. With trials set to continue around the world in 2017, I hope that the next round of testing will continue to build awareness of the terrible problem of overfishing.

In 2017, SafetyNet technology will also go on sale to fishermen for the first time, with the first batches available in the middle of the year. But Watson also has his sights set on influencing the wider industry for the better. He will give a presentation on the topic of overfishing to the directorate-general for maritime affairs and fisheries in Brussels, to attract the attention of industry regulators and potentially shape future legislation.

Nearly half of fish caught are thrown back into the sea because they are not suitable to be sold, and many don't survive. If a significant number of young fish are being killed unnecessarily, this has an impact on the overall fish population. The best inventions use engineering and technology to solve existing problems and make the world a better place. SafetyNet shows how young graduates such as Watson can tackle global issues, all too often ignored by established industries, in new and inventive ways.

James Dyson is a British inventor and industrial designer, and the founder of Dyson.

Neuroscience

Neural networks and their effect on Alzheimers disease
By Prof May-Britt Moser

In this post-fact era, I believe that scientists' engagement with society will be more important than ever before. We need to do our part in building public trust in science, by ensuring that our papers and talks are as solid and true to the data as possible, but also by making sure the knowledge we produce is made accessible to people.

I am excited about the novel results from our lab that we will share with the world in 2017. In our everyday lives, we rely on our ability to navigate and remember. Inside the brain, these cognitive functions have a physiological correlate as specific patterns of activity among nerve cells. Networks of communicating nerve cells form activity maps that each give rise to a specific function. In 2017, we will share new insights into the emergence and maturation of the cells and neural networks that give rise to higher cognitive functions like self-location and memory. These cells are also ground zero, and the very first to be affected by neurodegenerative diseases such as Alzheimer's. Knowing how these cells develop into functional networks, giving rise to cognition and behaviour, may help us understand what goes wrong when memory and navigation break down in people who are diagnosed with Alzheimer's disease.

May-Britt Moser is a Nobel prize-winning psychologist and neuroscientist at the Norwegian University of Science and Technology in Trondheim.

The arts

The documenta exhibition revives the notion of utopia for a dystopian world
By Stefan Kalmár

In 1955, art professor and curator Arnold Bode founded the documenta art exhibition in the West German city of Kassel, once considered by Hitler as a candidate for the German capital.

Documenta was originally initiated to introduce, or rather reintroduce, art formerly branded by the Nazis as degenerate to the postwar German public. This exhibition has, over the past 61 years, become the Olympus of all exhibitions. It is not a biennial; it is a vision, a proposition and a utopia in a hopelessly dystopian world.

Adam Szymczyk, the curator of documenta 14, which runs from 10 April to 17 September 2017, decided to stage, for the first time in the exhibitions history, one half in another European city: Athens. By doing so, he has mapped the field that best describes the dialectical tension in modern democracy today.

On one side is Kassels documenta: a post-fascist vehicle that believed in the transformative power of contemporary art. On the other side is Athens, the birthplace of democracy, which in recent years has become synonymous with the friction between democracy, national sovereignty and late capitalism.

In my lifetime, I have not experienced a more complex or greater existential crisis than today's, but a complex time can only be responded to with equally complex propositions. Documenta is a vehicle that affords complex engagement with art and culture as what it is: a manifestation that responds to the sociopolitical conditions of our time. It is this that makes documenta, and particularly this documenta, so important, as it attempts to mediate between western democracy and capitalism in a state of crisis.

Stefan Kalmár is a veteran art industry and gallery insider and the new director of the Institute of Contemporary Arts, London.

Architecture

MAAT in Lisbon, designed by AL_A. Photograph: Paulo Coelho/EDP Foundation

Centuries-old ideas show us how to define public space
By Amanda Levete

There has never been a more important time to find ways of bringing people together. We need public spaces in our cities and our buildings to unite people, spaces where everyone has the chance to gather and to celebrate what we have in common. I'm hopeful that 2017 will see the flickering resurgence of outdoor civic spaces blossom into something more profound and lasting.

As citizens, we have perhaps been taking them for granted, but now we are actively recognising the roles played by these vital parts of the urban fabric, and demanding that our cities and institutions protect and expand them.

In 2017 and beyond, we will be seeing cultural projects as urban projects, ones that engage with cities and their unrestrained, slightly messy, vibrancy. I'd like to think that MAAT, a new museum we designed in Lisbon, where the roof is a new place for people to appropriate as they like, is just one example of many more to come. It is used by lingering couples enjoying the sunset over the Tagus, by kids who just want to run up and down the steps, and by runners, cyclists and skateboarders.

There is something visceral about physical interaction that people are coming to value even more with the rise of the digital. There will be a return to looking at Italian urban planning, such as the Nolli map of Rome that allowed us to see the open public spaces connecting a city, or the Piazza del Popolo of Todi, the citys spiritual, civic and cultural heart, where everyone contributes to the sense of community and has done so for generations.

Of course, these are centuries-old ideas and, in 2017, I hope there will be an increased humility in the architectural community in acknowledging our inspirations and inheritances, as well as a renewal of that post-war idealism, when architects thought architecture could help make a better society. Sometimes, looking back can be a more radical move than the advent of virtual or augmented reality, but it is an approach that architects and cities will increasingly pursue.

Amanda Levete is a Stirling prize-winning British architect and the founder and principal of AL_A, whose new entrance and courtyard at the Victoria and Albert Museum, London, opens in 2017.

Space

The Starshot project and solar sail technology
By Maggie Aderin-Pocock

I've been celebrating 50 years of Star Trek this year. I used to watch it as a child and thought: "Oh yes, this is for me." I wanted warp drive, I wanted to travel to other planets and star systems. But as I grew older, I realised that our technology is so far from making interstellar travel possible. Until now.

Last April, the Starshot project was announced. It will use very high-powered lasers to accelerate solar sails on tiny spacecraft, sending them at a fifth of the speed of light to Alpha Centauri, our closest star system, in just 20 years. After the announcement, we discovered Proxima Centauri b, an Earth-like exoplanet orbiting Proxima Centauri itself. So it gets even more exciting. It pushes the technology we have at the moment to the limit, but the huge challenges are not insurmountable. I think we can do this, and work starts in 2017. We have a chance to take a closer look at an exoplanet, and perhaps even to find signs of life.
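The 20-year figure quoted above checks out against the basic numbers: Alpha Centauri is roughly 4.37 light years away, and at a fifth of light speed the arithmetic is simple (the distance figure is a widely cited value, not stated in the article itself):

```python
# Travel time to Alpha Centauri for a probe moving at 0.2c.
# By definition, a light year is the distance light covers in one year,
# so time in years is simply distance in light years over speed in c.
distance_ly = 4.37   # approximate distance to Alpha Centauri
speed_c = 0.2        # Starshot's target speed, a fifth of light speed
travel_time_years = distance_ly / speed_c
print(travel_time_years)  # a little under 22 years
```

The result is just under 22 years, which is where the project's "in just 20 years" headline figure comes from, before even counting the four-plus years the return signal needs to reach Earth.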

Solar sail technology will also allow us to study our own solar system in far less time. We sent the New Horizons probe to Pluto and it took almost 10 years. With solar sails, we could zip across the solar system in a matter of weeks and see what's out there.

As a child, I thought all this was possible, and when I started studying it, I reined in my expectations. But this year, for the first time, I'm letting the dream continue. It is incredibly challenging, but when we look at what we're achieving in miniaturisation and technology, I believe that in the next 15 to 20 years, we might have our first interstellar probe setting off for that 20-year journey to another star system. That puts it within my lifetime.

Maggie Aderin-Pocock is a space scientist and honorary research associate at the UCL department of physics and astronomy.

Developmental biology

A leap forward in embryo technology
By Dr Jim Smith

Science sometimes appears to advance in great leaps, but each of those leaps is usually based on years of painstaking and often unheralded work, work based on nothing more than curiosity about how the world works. My area of research is developmental biology: the question of how the fertilised egg becomes an adult organism with all the right cell types in the right place. Of course, it had occurred to me that developmental biology research might one day have practical benefits, but this was not why I did it. I did it because the problem is so intriguing.

But, as is often the case, this sort of discovery science is yielding extraordinary benefits. For example, the ability of developmental biologists to culture, manipulate and fertilise embryos in a petri dish, then to transfer the embryos to a mother, led to test-tube babies. And this year, thanks to pioneering work by Doug Turnbull, it has inspired the decision to allow doctors to apply for a licence to create three-person babies, thereby providing, for the first time, hope to mothers carrying mitochondrial disease.

Now we know so much about what happens during normal embryonic development, we are in the extraordinary position of being able to recapitulate it and even to reverse it. Doug Melton has shown how stem cells from patients with type 1 diabetes can be turned into pancreatic beta-cells; many researchers are making organoids, three-dimensional stem cell cultures that will allow the design of personal treatment regimes and generate new cells for gene editing and transplantation. Equally exciting is the recent discovery by Juan Carlos Izpisua Belmonte, inspired by his work on newt limb regeneration, that it may even be possible to reverse ageing.

As we understand more about development, now using techniques from chemistry, mathematics, engineering and physics, we can expect even more remarkable discoveries and treatments. This year, Magdalena Zernicka-Goetz managed to increase by 50% the length of time we can keep human embryos alive in a dish. I can't wait to see what we'll learn about ourselves.

Jim Smith is a developmental biologist and the new director of science at Wellcome, the science and health foundation.

Read more: https://www.theguardian.com/news/2017/jan/02/big-ideas-2017-driverless-cars-interstellar-travel-invention


Breakthrough prize awards $25m to researchers at ‘Oscars of science’

Researchers in life sciences, fundamental physics and mathematics share awards from prize founders Yuri Milner, Mark Zuckerberg and Sergey Brin

It is not often that a scientist walks the red carpet at a Silicon Valley party and has Morgan Freeman award them millions of dollars while Alicia Keys performs on stage and other A-listers rub shoulders with Nasa astronauts.

But the guest list for the Breakthrough prize ceremony is intended to make it an occasion. At the fifth such event in California last night, a handful of the world's top researchers left their labs behind for the limelight. Honoured for their work on black holes and string theory, DNA repair and rare diseases, and unfathomable modifications to Schrödinger's equation, they went home to newly recharged bank accounts.

Founded by Yuri Milner, the billionaire tech investor, with Facebook's Mark Zuckerberg and Google's Sergey Brin, the Breakthrough prizes aim to right a perceived wrong: that scientists and engineers are not appreciated by society. With lucrative prizes and a lavish party dubbed the "Oscars of science", Milner and his companions want to elevate scientists to rock star status.

The Silicon Valley backers paid out $25m in prizes at Sunday's ceremony at Nasa's Ames Research Center in California. It brought the total winnings for researchers in physics, life sciences and mathematics to $175m since the prizes were launched in 2012.

Huda Zoghbi, a Lebanese-born medical scientist at Baylor College of Medicine in Texas, was discussing her postdoctoral researcher's latest data when a prize judge called to tell her she had won. Sworn to secrecy, Zoghbi asked her postdoc, Laura, to leave the room while she took the call. "I was totally stunned," she said. "After the call, I invited Laura back in to continue our meeting, but can you imagine trying to concentrate?"

Zoghbi's work is a masterclass in scientific investigation. In one branch of research, she set out to understand the genetic causes of a rare condition called spinocerebellar ataxia. She ran tests on families affected by the disorder and found that a mutation in a gene called SCA1 was the sole cause of the disease. She then bred mice with the same mutation so she could study the disorder as it progressed from first symptoms.

Tests on the mice revealed that when SCA1 was mutated, the protein the gene helps to make could not be cleared from the animals' cells properly. And just as rubbish builds up in the house when the bins are not emptied, so levels of the protein, ataxin1, built up in mice with the mutation. "These cells may have only 10 to 20% more protein, but that little bit extra is enough to wreak havoc in the brain cells," Zoghbi said.

Having teased out the mechanism underlying the disease, Zoghbi went on to find an enzyme that when suppressed caused ataxin1 levels to fall. Her team is now searching for drugs that can block the enzyme. If they find one, it could become a treatment for the devastating disease.

Spinocerebellar ataxia affects one in 100,000 people. But Zoghbi's work on the condition, and on another called Rett syndrome, led her to study the most common neurodegenerative diseases, Parkinson's and Alzheimer's. In both groups of patients, abnormal proteins build up in the brain and potentially kill off neurons. In her latest work, Zoghbi showed that blocking an enzyme called Nuak1 stopped a protein called tau building up in the brains of mice. High levels of tau have long been linked to Alzheimer's disease. "What we have is a potential druggable target for dementia," she said.

Zoghbi, who received one of the five Breakthrough prizes in life sciences, plans to set up a mentorship award; a fund to help young postdocs pursue their own ideas; and scholarships at her alma mater, the American University in Beirut.

The prizes may give scientists a glimpse of fame, but celebrity has little appeal, Zoghbi said. "Material things and limelight are fleeting; they come and go. You could give me all the money in the world to do another job and I wouldn't do it," she said. "I am working on something that will help people, and that reward is with you every day." She sees her colleagues as an extended family: her lab members call themselves "Zoghbians".

Among the other awards handed out on Sunday was the Breakthrough prize in mathematics, won by Jean Bourgain at the Institute for Advanced Study in Princeton for work that ranges from extensions to Schrödinger's equation to the unification of maths itself. The Breakthrough prize in fundamental physics was shared by three academics for work on string theory and black holes. Joe Polchinski at the University of California, Santa Barbara, who has studied the baffling question of what happens to information that tumbles into black holes, plans to use the winnings for the betterment of science, but said he was "terrified" at what the next US administration might mean for research.

Morgan Freeman was invited to host Sunday's ceremony, where others on the guest list included Alex "A-Rod" Rodriguez, the former Nasa astronauts Mark and Scott Kelly, will.i.am, and Bryce Dallas Howard, who as Claire Dearing in Jurassic World justified the creation of the troublesome Indominus rex with the line: "We needed something scary and easy to pronounce." The celebrities, however, might find they are as unknown to the scientists as the scientists are to them. "My nieces and nephews will know more about them than I do," said Polchinski.

Another life sciences prize winner on Sunday was Stephen Elledge, a geneticist at Harvard Medical School. "I wasn't expecting it," he told the Guardian. "What can you say when someone tells you they are going to give you $3m? I'm not used to that, I can tell you."

Elledge discovered how cells respond to DNA damage. The mechanism can kill off the most tattered cells and put others into a state of suspended animation called senescence. The process prevents cancer by shutting down abnormal cells, but senescence also triggers inflammation that drives ageing. Elledge is now looking for ways to turn off the inflammation, or wipe out senescent cells completely. "That could impact all kinds of diseases in the ageing population," he said.

He is still working out what to do with his winnings, but one hope is to set up scholarships for disadvantaged kids from his hometown of Paris, Illinois. He also wants to support institutions that could come under pressure in the next administration. "Now that the political terrain has shifted in the US there are going to be a lot more places that will need help," he said. "In the US there is pressure against science. People deny the validity of science and facts. These are dark days. And as scientists we have to push back. We have to stand up to the challenge."

Read more: https://www.theguardian.com/science/2016/dec/05/breakthrough-prize-awards-2016-25m-to-researchers-at-oscars-of-science