An interview with young researcher Olaitan Awe at the 5th Heidelberg Laureate Forum, September 2017.
An interview with young researcher Abdulfatai Atte Momoh at the 5th Heidelberg Laureate Forum, September 2017.
An interview with young researcher Fatimah Abdul Razak at the 5th Heidelberg Laureate Forum, September 2017.
An interview with young researcher Larwan Burke at the 5th Heidelberg Laureate Forum, September 2017.
Ah, Christmas! A time to gather ’round the Christmas bear and thank God above for the bounty of bees.
Yesterday I got an email that said all my bitcoin was about to disappear. Hurricane Irma, it claimed, had damaged the servers of a company where my bitcoin was stored. There was a backup, but it would disappear soon, so I needed to move my money to the address provided.
Was it a scam? Yes, but I only knew that by getting geeky with the email headers. (Nerdnote: It originated in the Tor network and routed through a mail server in an offshore country.) But it was perhaps the easiest-to-believe email scam I’ve seen in my thirty years online. Why?
Because bitcoin sometimes just disappears.
This needs some explaining. When someone sends bitcoin, the transaction is recorded on a worldwide ledger. Practically speaking, it’s permanent: Nobody can go back to reverse the charges. That’s different from checks and credit cards, and the key to bitcoin’s excellent technical security.
(I talk about this in “How Bitcoin works”, a video from my Learning Bitcoin course. LinkedIn has allowed me to make that video available for free here.)
(Jonathan Reichental goes deeper in his course Blockchain Basics.)
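To see why reversal is impractical, consider this toy sketch of a hash-chained ledger. It’s not real Bitcoin — it omits proof-of-work, digital signatures, and the peer-to-peer network, and the names are made up — but it shows the core idea: each record’s hash covers the previous record’s hash, so altering any past transaction breaks every hash after it.

```python
import hashlib
import json

def block_hash(prev_hash, tx):
    """Hash a transaction together with the previous block's hash."""
    payload = json.dumps({"prev": prev_hash, "tx": tx}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

# Build a two-entry ledger, each entry chained to the one before it.
ledger = []
prev = "0" * 64  # placeholder "genesis" hash
for tx in ["Alice pays Bob 1 BTC", "Bob pays Carol 0.5 BTC"]:
    prev = block_hash(prev, tx)
    ledger.append({"tx": tx, "hash": prev})

# Rewriting the first transaction produces a different hash,
# which would invalidate every block that follows it.
tampered = block_hash("0" * 64, "Alice pays Mallory 1 BTC")
print(tampered != ledger[0]["hash"])  # → True
```

Everyone on the network keeps a copy of the chain, so a forged history is immediately detectable — which is exactly why nobody can “reverse the charges.”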
But “secure” doesn’t always mean “safe”: Bitcoin transactions are also irreversible when you accidentally lose bitcoins. And that happens a lot.
I know this from personal experience: my phone got wiped, and the digital backup of my bitcoin wallet failed. (I fortunately also had a paper backup, an extra step few bother with.) People have lost bitcoin by upgrading a computer, losing control of a phone number, tossing old electronic junk, and dozens of other ways. Digital assets are incredibly fragile.
If you don’t want to access bitcoin through your phone, the other option is an “online wallet”. But such a wallet is only as good as the company protecting it. After the company Mt. Gox made over a half billion dollars of bitcoin disappear, online wallets don’t seem like a good idea, either.
Billions of dollars of bitcoin have simply vanished and will never be recovered.
So when I got that email, it seemed not only plausible that my bitcoin was at risk, but likely. A company having bad backup procedures? That’s the rule, not the exception. A need for action to “save” my bitcoin? Sure — consider this somewhat confusing blog post from Coinbase when a spinoff currency (“fork”) happened this summer. I need to shuffle money into a new wallet? It won’t be the first time.
So what can you do?
- If you’re holding bitcoin on a computer or device, back up your wallet and test the backup periodically by restoring it in another location. A backup that doesn’t work isn’t a backup.
- Store a minimal amount in online wallets. I know, companies such as Coinbase claim that your deposits are fully insured — and as I know and respect that company, I believe it. But anyone can say that, including crooks. And Coinbase provided no details when asked. Without those, “trust me” doesn’t fly.
- Keep a written record of where you’ve stored your bitcoin, and “touch” it once in a while. Does that flash drive still work? Is your online wallet still in business? Has a phone upgrade made your bitcoin inaccessible?
- Follow all the usual best practices for online security: Use good passwords, confirm questionable activities, and so on.
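For readers comfortable with a little scripting, the first item above — testing your backup — can start with a byte-for-byte comparison. This sketch uses throwaway files so it runs anywhere; the wallet and backup file names are hypothetical, so substitute your own paths.

```python
import filecmp
import os
import shutil
import tempfile

# Stand-ins for a real wallet file and its backup (hypothetical paths).
tmp = tempfile.mkdtemp()
wallet = os.path.join(tmp, "wallet.dat")
backup = os.path.join(tmp, "wallet.bak")

with open(wallet, "wb") as f:
    f.write(b"wallet data")

shutil.copy2(wallet, backup)

# A backup that doesn't match byte-for-byte isn't a backup.
if filecmp.cmp(wallet, backup, shallow=False):
    print("backup verified")
else:
    print("backup FAILED verification")
```

A matching copy isn’t the whole test — you should still periodically restore the backup into your wallet software and confirm it opens — but it catches the most common failure: a copy that silently didn’t happen.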
As for what the industry can do…
Well, not much. At least, there’s no reason to change bitcoin itself, as its irreversibility is as much its strength as its weakness. Four years ago I predicted growth in secondary bitcoin services such as fund-clearing and insurance; some have since appeared to ameliorate bitcoin’s fragility, but I think more needs to be done in this area.
Bitcoin’s instability is bad for its users. But it’s also a business opportunity; whoever can solve it will reap great rewards.
Tom Geller is the author/presenter of the video course Learning Bitcoin and several others available through Lynda.com and LinkedIn Learning. He’s at tomgeller.com and tgprods.com, and on Twitter as tgeller.
I got the idea for “Almost a Brain” after learning that the world’s top supercomputers are nearing the level of raw power in our brains — about a billion billion (1,000,000,000,000,000,000) operations per second (an “exaflop”). I’ve since come to recognize that there’s a lot more to brain modeling than power, but supercomputing still has a special place in my heart.
It’s a topic I first covered in 2011, when the fastest computer could achieve about 1/120th of an exaflop. I wrote a few more articles on the subject, then created the following four-minute video. It features Daniel Reed, who’s a pretty interesting guy: He’s both a computer science professor and a college Vice President at the University of Iowa. (And he helped create the first web browser in the early ’90s!)
Somewhat related to supercomputing is quantum computing, which has the potential for much more power (in some respects) than traditional supercomputers can offer. University of Arizona Professor Stuart Hameroff has put forth some intriguing theories about quantum computing in the human brain: I look forward to digging deep for his Almost a Brain interview.
But for a bit of background in the meantime, enjoy this video I did with Professor Benoît Valiron about how to program quantum computers.
(This post is the first in a series of four. Soon to come: insights from research in artificial intelligence, human behavior modeling, and computational biology.)
Artificial intelligence and supercomputers provide the power. What happens next?
ROTTERDAM, THE NETHERLANDS, May 12, 2017 — 100 billion neurons. 100,000 billion connections. A billion billion operations per second. The numbers are incomprehensible. This is your brain. And soon, computers will be able to mimic its operations on the most fundamental levels.
What happens then? Will these new brains understand themselves as we do? Will they feel? What aspects of human thought will remain the province of humans alone? How will we revise our ideas of humanity?
These are questions the documentary “Almost a Brain” will explore through archival research and original interviews with neurologists, computer scientists, and leaders in philosophy. The project is led by Tom Geller, a technology journalist who has produced videos and articles on related topics for The Association for Computing Machinery (ACM), Nature.com, and others.
“Computer models of the human brain are already sophisticated enough to help figure out and treat disorders such as epilepsy,” Geller said. “Now, projects like the Human Brain Project in Europe and the BRAIN Initiative in the United States are filling in the gaps. I believe it’s only a matter of time before something resembling ‘thought’ emerges from such models, whether unexpectedly or through concerted efforts. How it differs from that of biological humans, and how we react to it, will fundamentally change how we see ourselves.”
With bases in The Netherlands (Rotterdam) and the U.S. (Oberlin, Ohio), Tom Geller Productions has secured interview commitments with experts including AI pioneer Eric Horvitz, Microsoft Technical Fellow and Director of Microsoft Research Labs; and Professor Jack Dongarra, who tracks the world’s fastest computers through the semi-annual TOP500 reports.
To participate or learn more, visit almostabrain.com.
Originally published at https://www.linkedin.com/pulse/documentary-almost-brain-explore-how-computers-nearing-tom-geller
And why they won’t be featured in “Almost a Brain”
Last Friday I announced my upcoming documentary about computers that model the human brain, “Almost a Brain“. And I’ve started talking about it to everyone I can. I practice my pitch on them while watching their faces for signs of interest, skepticism, and outrage. This is market research; it’ll affect what the documentary covers, and how.
But there’s one reaction that I’m basically going to ignore: “When computers are smarter than us, won’t they take over?” It’s an old fear that’s gotten a lot of attention in the last few years because of advances in artificial intelligence (AI) — and the concerns of famous “smart people” including Stephen Hawking and Elon Musk. It’s a tale perfect for the anxiety-addicted U.S. media, featuring celebrities, strong opinions, and imminent danger.
And I don’t care.
Well, that’s not completely true. I care in the sense that I believe that any new, powerful tool demands caution and respect. We can expect a period of unbounded possibility and lawlessness, followed by a settling down as society decides what costs are worth the rewards. This consensus is never pretty: Consider the million-plus deaths per year we accept in exchange for the benefits we get from driving cars, for example. It’s wise to start the discussion now.
For the purposes of my documentary, though, the apocalypse argument isn’t interesting. First, because it’s unlikely, at least in the general sense described in the media. Second, it’s already well-covered: I have nothing new to add. Third, and most importantly, it’s a technological discussion of something that’s ultimately a human matter. The doomsday scenarios require human cooperation to build, spread, and apply the technology, in the face of (human) opposition. That’s a much bigger nut to crack than the technical ones.
These human matters also lead to more interesting questions. Such as: What is human thought? Will we recognize it when we see it? How will we then differentiate ourselves from our creations?
These questions are at least as old as biblical stories of golems. They have new importance now, as supercomputers approach the raw processing power of the human brain; neurologists can map and better understand the relationship between brain and thought; and artificial intelligence opens new windows into how we learn, and ultimately create. So the stimulus to make this documentary now is technological; its motivation, however, is human.
That’s why I’m actively pursuing sources in the areas of philosophy and human neurology for the documentary — areas outside my own field of computer science. As Wavy Gravy often says, “It’s all done with people.” So Almost a Brain is ultimately about people — old and new.
Originally published at https://www.linkedin.com/pulse/why-killer-robots-dont-worry-me-tom-geller
Two professors at the University of Bristol discuss how to apply artificial intelligence to improve the peer-review process for journals and conferences.
An interview with Frits Vaandrager of Radboud University (The Netherlands), on a system that probes unknown systems to figure out their inner logic.
A promotional video about the Heidelberg Laureate Forum, a week-long meeting of 200 advanced young researchers in math and computer science, together with about two dozen “laureates” who have won the world’s top prizes in those topics (Turing Award, Abel Prize, Fields Medal, and Nevanlinna Prize). Commissioned by the Association for Computing Machinery, which sponsors the Turing Award.
Matei Zaharia talks about his creation Apache Spark, a modular platform for performing calculations on big data.
I was just a plain-old PHP, JS, HTML designer-developer, until I was introduced to Drupal 7 from your videos. They got me up and running quickly, and so I thank you for that!
An online report on the Heidelberg Laureate Forum, which gathers recipients of the world’s most prestigious math and computer science awards (Turing, Fields, Abel, Nevanlinna) with 200 advanced young researchers. Includes two original photos.
I enjoyed the b-roll you took. Also happy you balanced the message — as we say in the paper, it is easy just to hype the threat.
Vanderbilt University Dean M. Eric Johnson reviews how medical device security issues, including those in their software, have threatened health in the past, and summarizes the current state of affairs.
On-site report from Las Vegas of the “DARPA Cyber Grand Challenge”, a multi-million dollar, U.S. government-sponsored competition where computers try to hack each other. Covered the action and conducted interviews over three days. Here’s the article I wrote about this event.
An interview with University of Pennsylvania Professor Susan Davidson about the need to change how source information is cited in (for example) academic papers. She and her co-authors propose a framework that allows a greater diversity of sources and more flexibility in citing them.
Coverage of DARPA’s Cyber Grand Challenge, a competition in Las Vegas where seven teams trained computers to hack each other for over three million dollars in prizes. Here’s the video I produced at this event.
An interview with ACM Fellow and MIT Professor Bonnie Berger about how to improve computer handling of biological data, specifically relating to genomes.