Hallie Siegel writes "Robotics expert Alan Winfield offers a sobering counterpoint to Ray Kurzweil's recent claim that 2029 will be the year that robots will surpass humans. From the article: 'It’s not just that building robots as smart as humans is a very hard problem. We have only recently started to understand how hard it is well enough to know that whole new theories ... will be needed, as well as new engineering paradigms. Even if we had solved these problems and a present day Noonian Soong had already built a robot with the potential for human equivalent intelligence – it still might not have enough time to develop adult-equivalent intelligence by 2029'"
The latest title in the stealth game series Thief launched in North America yesterday for the PS3, PS4, Xbox 360, Xbox One, and Windows. Reviews of the game are mixed. Rock, Paper, Shotgun's John Walker says that the story is poor, but "it matters very little, since it's only there as an excuse to link epic, intricate and hugely enjoyable levels together." He also laments the loss of a dedicated "Jump" button, noting that veterans of the series will miss it. "There are far too often obstacles that a toddler could easily scale, but Garrett won't even try, and his refusing to jump certain gaps in order to force a challenge is maddening." Polygon's review says navigating the game's open environments was fun, but "In the latter half of the game, when a glimpse of that openness was dangled in front of me once again, Thief snatched it away with murderous AI and controls that didn't feel up to the challenge." They add, "a new obsession with scripted story sequences and stealth action often leaves Thief feeling like the worst of both worlds." Giant Bomb's review is brutal, saying Thief is "a game that spends an inordinate amount of time making the player do uninteresting things while shoving the more fun stuff so far in the corner you'd be forgiven for missing most of it."
Nerval's Lobster writes "Ray Kurzweil, the technologist who's spent his career advocating the Singularity, discussed his current work as a director of engineering at Google with The Guardian. Google has big plans in the artificial-intelligence arena. It recently acquired DeepMind, a self-billed 'cutting edge artificial intelligence company,' for $400 million; that's in addition to snatching up all sorts of startups and research scientists devoted to everything from robotics to machine learning. Thanks to the massive datasets generated by the world's largest online search engine (and the infrastructure allowing that engine to run), those scientists could have enough information and computing power at their disposal to create networked devices capable of human-like thought. Kurzweil, having studied artificial intelligence for decades, is at the forefront of this in-house effort. In his interview with The Guardian, he couldn't resist throwing some jabs at other nascent artificial intelligence systems on the market, most notably IBM's Watson: 'IBM's Watson is a pretty weak reader on each page, but it read the 200m pages of Wikipedia. And basically what I'm doing at Google is to try to go beyond what Watson could do. To do it at Google scale. Which is to say to have the computer read tens of billions of pages. Watson doesn't understand the implications of what it's reading.' That sounds very practical, but at a certain point Kurzweil's predictions veer into what most people would consider science fiction. He believes, for example, that a significant portion of people alive today could end up living forever, thanks to the ministrations of ultra-intelligent computers and beyond-cutting-edge medical technology."
An anonymous reader writes "Corporate employees editing Wikipedia articles about themselves or their employers sometimes commit major violations of Wikipedia's 'bright line' against paid editing, devised by Jimbo Wales himself to prevent conflict-of-interest ('COI') editing. (Consider the recent flap over the firm Wiki-PR's activities, for example.) Yet the Wikipediocracy website, run by critics of Wikipedia management, has just published an article about IBM employees editing Wikipedia articles. Not only is such editing apparently commonplace, it's being badly done as well. And most bizarrely, one of the IBM employees is a Wikipedia administrator, who is married to another Wikipedia administrator. She works on the Watson project, which uses online databases to build its AI system... including the full text of Wikipedia." Reading about edit wars is also far more informative (if less entertaining) than reading the edit wars themselves.
malachiorion writes "I'm surprised I haven't seen more coverage of Lockheed Martin's autonomous truck convoy demonstration — they sent a group of robotified vehicles through urban and rural environments at Fort Hood, without teleoperation or human intervention. It's an interesting milestone, and sort of a tragic one, since troops could have used robotic vehicles in Iraq and Afghanistan. What's fascinating, though, is that Lockheed is hoping to get into Afghanistan just before the U.S. withdraws, to help ferry gear. Plus, they have their sights set on what would be the defense contractor's first real commercial product—kits that turn tractor trailers into autonomous vehicles. Here's my post for Popular Science."
v3rgEz writes "A new startup out of MIT offers early adopters a chance at the afterlife, of sorts: It promises to build an AI representation of the dearly departed based on chat logs, email, Facebook, and other digital exhaust generated over the years. "Eterni.me generates a virtual YOU, an avatar that emulates your personality and can interact with, and offer information and advice to your family and friends after you pass away," the team promises. But can a chat bot plus big data really produce anything beyond a creepy, awkward facsimile?"
Nerval's Lobster writes "Researchers in Germany have discovered what they say is a way to get computers to do more than execute all the steps of a problem-solving calculation as fast as possible – by getting them to imitate the human brain's habit of finding shortcuts to the right answer. A team of scientists from Freie Universität Berlin, the Bernstein Center Berlin, and Heidelberg University have refined the idea of parallel computing into one they describe as neuromorphic computing. In their design, a whole series of processors designed as silicon neurons rather than ordinary CPUs are linked together in a network similar to the highly interconnected mesh that links nerve cells in the human brain. Problems fed into the neuro mesh are broken up and processed in parallel, but not always using the same process. The method by which neuromorphic processors handle problems varies with the way they're linked together, as is the case with neurons in the brain. The chips are designed to copy the layout and functions of brain cells, but the way they're interconnected is based on another highly efficient biological model. 'The design of the network architecture has been inspired by the odor-processing nervous system of insects,' said one of the researchers. 'This system is optimized by nature for a highly parallel processing of the complex chemical world.' In tests using real-world datasets, the prototype was able to match the performance of specialized Bayesian pattern-matching systems. Even better, the stable decisions reached by 'output neuron populations' take approximately 100 milliseconds, which is the same speed required by the insect nervous systems on which the network design is based, according to the paper."
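The 'output neuron population' decision mechanism can be illustrated with a toy model. The sketch below is not the researchers' silicon design; it is a minimal software caricature (all names and parameters invented here) of winner-take-all dynamics, where each output population is excited by the input and suppressed by its rivals, loosely mimicking the lateral inhibition found in insect odor-processing circuits:

```python
def winner_take_all(inputs, weights, inhibition=0.5, steps=20):
    """Toy 'output neuron population' dynamics: each population is driven
    by a weighted sum of the inputs and suppressed by the total activity
    of its rivals (crude lateral inhibition)."""
    drive = [sum(w * x for w, x in zip(ws, inputs)) for ws in weights]
    acts = [0.0] * len(weights)
    for _ in range(steps):
        total = sum(acts)
        acts = [max(0.0, a + d - inhibition * (total - a))
                for a, d in zip(acts, drive)]
    # the most active population is the network's decision
    return max(range(len(acts)), key=lambda i: acts[i])
```

After a few iterations the best-driven population dominates while the rest are pushed to zero, which is what makes the eventual decision stable.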
theodp writes "Weighing in for the WSJ on Spike Jonze's Oscar-nominated, futuristic love story Her (parodies), Stephen Wolfram — whose Wolfram Alpha drives the AI-like component of Siri — thinks that an operating system like Samantha as depicted in the film isn't that far off. In Her, OS Samantha and BeautifulHandwrittenLetters.com employee Theodore Twombly have a relationship that appears to exhibit all the elements of a typical romance, despite the OS's lack of a physical body. They talk late into the night, relax on the beach, and even double date with friends. Both Wolfram and Google director of research Peter Norvig (who hadn't yet seen the film) believe this type of emotional attachment isn't a big hurdle to clear. 'People are only too keen, I think, to anthropomorphize things around them,' explained Wolfram. 'Whether they're stuffed animals, Tamagotchi, things in videogames, whatever else.' By the way, why no supporting actor nomination for Jonze's portrayal of foul-mouthed animated video game character Alien Child?"
TechCrunch reports that Google has acquired London-based artificial intelligence firm DeepMind. TechCrunch notes that the purchase price, as reported by The Information, was somewhere north of $500 million, while a report at PC World puts the purchase price lower, at a mere $400 million. Whatever the price, the acquisition means that Google has beaten out Facebook, which reportedly was also interested in DeepMind. Exactly what the startup will bring to Google isn't clear, though it seems to fit well with the emphasis on AI that the company underscored with its hiring of futurist Ray Kurzweil: "DeepMind's site currently only has a landing page, which describes it as 'a cutting edge artificial intelligence company' building general-purpose learning algorithms for simulations, e-commerce, and games. As of December, the startup had about 75 employees, reports The Information. In 2012, Carnegie Mellon professor Larry Wasserman wrote that the 'startup is trying to build a system that thinks. This was the original dream of AI. As Shane [Legg] explained to me, there has been huge progress in both neuroscience and ML and their goal is to bring these things together. I thought it sounded crazy until he told me the list of famous billionaires who have invested in the company.'"
mask.of.sanity writes "Life could become more difficult for fraudsters on Skype thanks to new research by Microsoft boffins that promises to cut down on fake accounts across the platform. The research (PDF) combined information from diverse sources including a user's profile, activities, and social connections into a supervised machine learning environment that could automate the presently manual tasks of fraud detection. The results show the framework boosted fraud detection rates for particular account types by 68 per cent with a 5 per cent false positive rate."
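Supervised fraud detection of this kind boils down to learning a weighting over per-account features. As a rough sketch (not Microsoft's actual framework; the feature names and training setup are invented for illustration), a minimal logistic regression trained on labeled accounts looks like this:

```python
import math

def train_logreg(X, y, lr=0.1, epochs=500):
    """Minimal per-sample gradient descent for logistic regression.
    Each row of X holds account features (e.g. profile completeness,
    contact-request rate); y holds 1 for fraud, 0 for legitimate."""
    w = [0.0] * (len(X[0]) + 1)          # last slot is the bias term
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[-1] + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted fraud probability
            g = p - yi                        # gradient of the log loss
            for j, xj in enumerate(xi):
                w[j] -= lr * g * xj
            w[-1] -= lr * g
    return w

def predict(w, xi):
    """Fraud probability for one account's feature vector."""
    z = w[-1] + sum(wj * xj for wj, xj in zip(w, xi))
    return 1.0 / (1.0 + math.exp(-z))
```

In practice the interesting work is in the feature engineering (profile, activity, and social-graph signals) and in tuning the decision threshold to trade detection rate against false positives, the 68% / 5% figures quoted above.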
Zothecula writes "Scientists at Brigham Young University (BYU) have developed an algorithm that can accurately identify objects in images or videos and can learn to recognize new objects on its own. Although other object recognition systems exist, the Evolution-Constructed Features algorithm is notable in that it decides for itself what features of an object are significant for identifying the object and is able to learn new objects without human intervention."
An anonymous reader writes "Gamer rage is a common phenomenon among people who play online, a product of the intense frustration created by stressful in-game situations and an inability to cope. It can have a significant impact on the gamer's ability to play well, and to get along with others. To combat this rage and train gamers to deal with the stress, visual designer Samuel Matson of Seattle has created the Immersion project, integrating a pulse sensor tied to a Tiny Arduino with Bluetooth into a headset to monitor the gamer's heart rate. The heart rate data is sent in real time to the gaming PC, where it is displayed in the game. Matson even created a simple FPS using the Unity game engine that varies the AI and gaming difficulty based on the user's heart rate. Using this system, gamers are able to train themselves to recognize the stress and learn to control it, becoming more agreeable and competitive players."
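The core of such a system is just a mapping from heart rate to a difficulty level, smoothed so momentary pulse spikes don't cause the AI to lurch. This is a plausible sketch, not Matson's actual code; the class, parameters, and the linear mapping are all assumptions:

```python
class AdaptiveDifficulty:
    """Map heart-rate readings (BPM) onto a 0..1 difficulty scale,
    with exponential smoothing so single noisy readings don't jolt
    the game's AI."""
    def __init__(self, resting=70.0, peak=180.0, smoothing=0.8):
        self.resting = resting
        self.peak = peak
        self.smoothing = smoothing
        self.level = 0.0
    def update(self, bpm):
        # normalize the reading between resting and peak rate, clamped
        raw = (bpm - self.resting) / (self.peak - self.resting)
        raw = min(1.0, max(0.0, raw))
        # exponential moving average toward the new reading
        self.level = self.smoothing * self.level + (1 - self.smoothing) * raw
        return self.level
```

Each frame (or pulse-sensor update) the game calls update() and scales enemy aggression by the returned level; the smoothing constant controls how quickly the AI reacts to a stressed player.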
Hugh Pickens DOT Com writes "Tom Friedman begins his latest op-ed in the NYT with an anecdote about Dutch chess grandmaster Jan Hein Donner who, when asked how he'd prepare for a chess match against a computer, replied: 'I would bring a hammer.' Donner isn't alone in fantasizing that he'd like to smash some recent advances in software and automation like self-driving cars, robotic factories, and artificially intelligent reservationists, says Friedman, because they are 'not only replacing blue-collar jobs at a faster rate, but now also white-collar skills, even grandmasters!' In the First Machine Age (the Industrial Revolution), each successive invention delivered more and more power, but they all required humans to make decisions about them. ... Labor and machines were complementary. Friedman says that we are now entering the 'Second Machine Age,' where we are beginning to automate cognitive tasks, because in many cases today artificially intelligent machines can make better decisions than humans. 'We're having the automation and the job destruction,' says MIT's Erik Brynjolfsson. 'We're not having the creation at the same pace. There's no guarantee that we'll be able to find these new jobs. It may be that machines are better than that.' Put all the recent advances together, says Friedman, and you can see that our generation will have more power to improve (or destroy) the world than any before, relying on fewer people and more technology. 'But it also means that we need to rethink deeply our social contracts, because labor is so important to a person's identity and dignity and to societal stability.' 'We've got a lot of rethinking to do,' concludes Friedman, 'because we're not only in a recession-induced employment slump. We're in a technological hurricane reshaping the workplace.'"
mikejuk writes "A recent xkcd strip has started some deep academic thinking. When AI expert Peter Norvig gets involved, you know the algorithms are going to fly. Code Golf is a reasonably well known sport of trying to write an algorithm in the shortest possible code. Regex Golf is similar, but in general the aim is to create a regular expression that accepts the strings in one list and rejects the strings in a second list. This started Norvig, the well-known computer scientist and director of research at Google, thinking about the problem. Is it possible to write a program that would create a regular expression to solve the xkcd problem? The result is an NP-hard problem that needs AI-like techniques to get an approximate answer. To find out more, read the complete description, including Python code, on Peter Norvig's blog. It ends with this challenge: 'I hope you found this interesting, and perhaps you can find ways to improve my algorithm, or more interesting lists to apply it to. I found it was fun to play with, and I hope this page gives you an idea of how to address problems like this.'"
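The flavor of Norvig's approach is a greedy approximation to set cover: gather candidate regex fragments that match some winners but no losers, then repeatedly pick the fragment covering the most still-uncovered winners and OR them together. The sketch below follows that spirit using only literal substrings as candidates (Norvig's version is considerably richer); it is an illustration, not his code:

```python
import re

def regex_golf(winners, losers):
    """Greedy set cover over substring candidates: build a pattern that
    matches every winner and no loser, in the spirit of Norvig's post."""
    def matches(part, strings):
        return {s for s in strings if re.search(part, s)}
    # candidates: every substring of every winner, regex-escaped,
    # filtered down to those that match no loser
    parts = {re.escape(w[i:j]) for w in winners
             for i in range(len(w)) for j in range(i + 1, len(w) + 1)}
    parts = {p for p in parts if not matches(p, losers)}
    uncovered, chosen = set(winners), []
    while uncovered:
        # pick the part covering the most uncovered winners (shorter wins ties)
        best = max(parts, key=lambda p: (len(matches(p, uncovered)), -len(p)))
        covered = matches(best, uncovered)
        if not covered:
            raise ValueError("no candidate separates the remaining winners")
        chosen.append(best)
        uncovered -= covered
    return "|".join(chosen)
```

Because exact set cover is NP-hard, the greedy choice only approximates the shortest pattern, which is exactly why the problem invites the AI-like techniques mentioned above.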
innocent_white_lamb writes "Current laws make the driver of a car responsible for any mayhem caused by that vehicle. But what happens when there is no driver? This article argues that the dream of a self-driving car is futile since the law requires that the driver is responsible for the operation of the vehicle. Therefore, even if a car is self-driving, you as the driver must stay alert and pay attention. No texting, no reading, no snoozing. So what's the point of a self-driving car if you can't relax or do something else while 'driving?'"
MojoKid writes "Over the past few years, short game writing 'jams' have become a popular way to bring developers together around a single overarching theme. These competitions are typically 24-48 hours long and involve a great deal of caffeine, frantic coding, and creative design. The 28th Ludum Dare competition, held from December 13-16 of this past year, was one such game jam — but in this case, it had an unusual participant: Angelina. Angelina is a computer AI designed by Mike Cook of Goldsmiths, London University. His long-term goal is to discover whether an AI can complete tasks that are generally perceived as creative — ultimately, to create an AI that can 'design meaningful, intelligent and enjoyable games completely autonomously.' Angelina's entry into Ludum Dare, dubbed 'To That Sect,' is a simple 3D title that looks like it hails from the Wolfenstein era. Angelina's initial game is simple, but in reality Angelina is an AI that can understand the use of metaphor and build thematically appropriate content, which is pretty impressive. As future versions of the AI improve, the end result could be an artificial intelligence that 'understands' human storytelling in a way no species on Earth can match."
First time accepted submitter hrb1979 writes "Thought I'd share an interview with Kang Zhao — the professor behind the machine learning algorithm which could transform online dating. His algorithm takes into account both a user's tastes (in an approach similar to the Netflix recommendation engine) and their attractiveness (by analyzing how many responses they get) — enabling the machine to 'learn' and hence propose higher potential matches. His research was recently covered in both a Forbes article and the MIT Technology Review, though this interview provides more depth and color."
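The two signals described — taste compatibility and attractiveness — can be combined into a single ranking score. The snippet below is a deliberately simplified stand-in for Zhao's method (his uses collaborative filtering over interaction histories; here taste is reduced to cosine similarity of preference vectors, and attractiveness to an observed reply rate):

```python
import math

def rank_matches(user_taste, candidates):
    """Rank candidates by taste similarity weighted by reply rate.
    Each candidate is a dict with 'name', 'taste' (a preference
    vector), and 'reply_rate' (fraction of messages answered)."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0
    scored = [(cosine(user_taste, c["taste"]) * c["reply_rate"], c["name"])
              for c in candidates]
    return [name for score, name in sorted(scored, reverse=True)]
```

The multiplicative combination means a candidate who matches your tastes perfectly but never replies ranks no better than a poor match, which is the intuition behind folding response behavior into the recommendation.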
KentuckyFC writes "A curious thing about video games is that computers have never been very good at playing them like humans by simply looking at a monitor and judging actions accordingly. Sure, they're pretty good if they have direct access to the program itself, but 'hand-eye coordination' has never been their thing. Now our superiority in this area is coming to an end. A team of AI specialists in London have created a neural network that learns to play games simply by looking at the RGB output from the console. They've tested it successfully on a number of games for the legendary Atari 2600 system of the 1980s. The method is relatively straightforward. To simplify the visual part of the problem, the system down-samples the Atari's 128-colour, 210x160 pixel image to create an 84x84 grayscale version. Then it simply practices repeatedly to learn what to do. That's time-consuming, but fairly simple since at any instant in time during a game, a player can choose from a finite set of actions that the game allows: move to the left, move to the right, fire and so on. So the task for any player — human or otherwise — is to choose an action at each point in the game that maximizes the eventual score. The researchers say that after learning Atari classics such as Breakout and Pong, the neural net can then thrash expert human players. However, the neural net still struggles to match average human performance in games such as Seaquest, Q*bert and, most importantly, Space Invaders. So there's hope for us yet... just not for very much longer."
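Two pieces of the pipeline are easy to sketch in isolation: the frame preprocessing (210x160 RGB down to 84x84 grayscale) and the exploration rule used while the network practices. The code below is an illustrative approximation (nearest-neighbour resizing and a standard epsilon-greedy rule), not the researchers' implementation:

```python
import random

def preprocess(frame, out_h=84, out_w=84):
    """Down-sample an RGB frame (a list of rows of (r, g, b) tuples)
    to an 84x84 grayscale image: luminance conversion plus
    nearest-neighbour resizing."""
    in_h, in_w = len(frame), len(frame[0])
    out = []
    for y in range(out_h):
        row = []
        for x in range(out_w):
            r, g, b = frame[y * in_h // out_h][x * in_w // out_w]
            row.append(round(0.299 * r + 0.587 * g + 0.114 * b))
        out.append(row)
    return out

def epsilon_greedy(q_values, epsilon=0.1, rng=random):
    """Pick the highest-valued action most of the time, a random one
    with probability epsilon -- the standard exploration rule in
    Q-learning, over the game's finite action set."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

Between these two steps sits the hard part: a network that estimates, from the preprocessed frames, the long-term score each action leads to.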
New submitter mni12 writes "I have been working on a Bayesian Morse decoder for a while. My goal is to have a CW decoder that adapts well to different ham radio operators' rhythm, sudden speed changes, signal fluctuations, interference, and noise — and has the ability to decode Morse code accurately. While this problem is not as complex as speaker-independent speech recognition, there is still a lot of human variation where machine learning algorithms such as Bayesian probabilistic methods can help. I posted a first alpha release yesterday, and despite all the bugs, one brave ham has already reported success. I would like to collect thousands of audio samples (WAV files) of real-world CW traffic captured by hams via some sort of online system that would allow hams not only to upload captured files but also provide relevant details such as their callsign, date & time, frequency, radio / antenna used, software version, comments etc. I would then use these audio files to build a test library for automated tests to improve the Bayesian decoder performance. Since my focus is on improving the decoder and not starting to build a digital audio archive service, I would like to get suggestions of any open source (free) software packages, online services, or any other ideas on how to effectively collect a large number of audio files without putting much burden on alpha / beta testers to submit their audio captures. Many available services require registration and don't support metadata or aggregation of submissions. Thanks in advance for your suggestions."
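The operator-adaptation problem is easy to see in miniature. A dash is nominally three dots long, but every operator's "dot" is different and drifts with sending speed. The sketch below is a much-simplified stand-in for the submitter's Bayesian machinery: it seeds a dot-length estimate from the shortest observed tone and tracks it with an exponential moving average (the function, the partial code table, and all constants are invented for illustration):

```python
# Partial international Morse table, dot-dash string -> letter
MORSE = {".-": "A", "-...": "B", "-.-.": "C", ".": "E", "..": "I",
         "---": "O", "...": "S", "-": "T"}

def durations_to_code(durations, smoothing=0.7):
    """Classify tone durations as dots or dashes, tracking the
    operator's dot length so gradual speed changes are followed."""
    dot_len = min(durations)          # seed estimate from shortest tone
    code = []
    for d in durations:
        if d > 2.0 * dot_len:         # dashes are nominally 3 dots long
            code.append("-")
            dot_len = smoothing * dot_len + (1 - smoothing) * (d / 3.0)
        else:
            code.append(".")
            dot_len = smoothing * dot_len + (1 - smoothing) * d
    return "".join(code)
```

A real decoder must also handle noisy, fluctuating signals where element boundaries themselves are uncertain, which is exactly where the Bayesian treatment (and a large test library of real-world recordings) earns its keep.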
An anonymous reader writes "Since the first demonstration of the plausible future abilities of Google Glass, instant facial recognition has been one of the most exciting ideas in the pipeline. According to the development group Facial Network, real-time facial recognition through Google Glass is coming a lot sooner than we originally expected. This isn't an app developed by Google; it comes from a third-party developer group — they've gone and done it first!" The application is not on the Play store due to the ban on facial recognition. It performs real-time recognition and pulls information from public databases. The authors intend to allow people to opt out of the recognition database.