Wikidata is operated by the Wikimedia Foundation and its fact database is published under a Creative Commons CC0 public domain dedication. Funding for Wikidata's initial development was provided by the Allen Institute for Artificial Intelligence (AI2), the Gordon and Betty Moore Foundation, and Google, Inc.
BY JOHN COOK on May 12, 2015 at 1:12 pm
The Allen Institute for Artificial Intelligence has established a new startup incubator at its offices in Seattle, recruiting two high-level researchers who will try to develop technologies in the emerging field.
Joining the Institute’s new incubator program are Prismatic co-founder Aria Haghighi and Johns Hopkins University PhD graduate Xuchen Yao.
“We are quickly building an element of the Seattle tech ecosystem, and we’ve identified cutting-edge folks who are startup minded,” said Oren Etzioni, the former University of Washington computer science professor who now leads the Allen Institute for Artificial Intelligence. “Once we identify super-talented folks like Xuchen and Aria — we give them a lot of freedom to pursue their instincts and initiatives.”
Bankrolled by Microsoft co-founder Paul Allen and known as AI2, the Institute was formed 16 months ago to create a new wave of research in the field of artificial intelligence that goes beyond well-known technologies like Siri and Watson.
Among the projects being developed at AI2 is Aristo, described as a “first step towards a machine that contains large amounts of knowledge in machine-computable form that can answer questions, explain those answers, and discuss those answers with users.”
Etzioni calls Haghighi and Yao “anchor tenants” of the incubator, which at this stage is small and is not actively accepting other entrepreneurs.
“Our incubator focuses on the very best technical talent in AI whose work dovetails with the research at AI2,” said Etzioni, adding that there’s “excellent potential for synergy with the technologies” being developed at AI2.
Yao, a former intern at Google and Paul Allen’s Vulcan, is developing a technology known as KITT.ai, which among other things is designed to turn natural language into computer code, with potential applications in home automation and the Internet of Things.
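The article doesn’t describe how KITT.ai actually works. Purely as an illustrative sketch, a natural-language command might be mapped to a home-automation action with a simple rule-based parser; every phrase and device name below is hypothetical:

```python
import re

# Illustrative sketch only: this article does not describe KITT.ai's actual
# approach. A toy rule-based parser maps a natural-language command to a
# home-automation action; every phrase and device name here is hypothetical.
ACTIONS = {
    "turn on": "ON",
    "turn off": "OFF",
}

def parse_command(utterance):
    """Return (device, action) for a simple 'turn on/off the X' command."""
    text = utterance.lower()
    for phrase, action in ACTIONS.items():
        match = re.search(phrase + r" (?:the )?(\w+)", text)
        if match:
            return match.group(1), action
    return None

print(parse_command("Please turn on the thermostat"))  # ('thermostat', 'ON')
```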
Haghighi, who has relocated to Seattle, spent three years building San Francisco-based news discovery service Prismatic. In March, TechCrunch reported that Microsoft was interested in buying the company, which had raised about $15 million in venture capital. Haghighi left Prismatic in 2013, joining Apple, where he worked on the company’s recently launched Apple Watch.
His LinkedIn bio simply says: “Collaborating with the awesome team at AI2, while I tinker on ideas for a new venture.”
At least that is what Raj Singh, the founder and CEO of Menlo Park, Calif.-based Tempo AI, is seeing as he works to build a smart calendar that automatically manages all aspects of your work and personal life.
During a recent visit to Seattle, Singh explained to me just how fast the sector was evolving.
Google, Facebook, Microsoft, and Amazon are all actively acquiring talent and technology to advance the personal assistant space beyond what Singh calls the “participatory” nature of it today, to become “anticipatory.”
Here’s just a small taste of the big moves being made:
Artificial intelligence is a broad term that covers a number of technologies, including speech recognition, image recognition and natural language processing.
Singh got his start in the category as an entrepreneur in residence at SRI International, the nonprofit research lab that is famous for spinning off what eventually became Apple’s Siri. In December 2012, Singh founded Tempo AI using some of the research lab’s technology and launched the calendar app early last year. The 16-employee company has raised $12.5 million in capital from Qualcomm Ventures, Sierra Ventures, Relay Ventures and others.
He said Apple’s Siri is a “first-generation assistant,” but in the future, you won’t have to ask the assistant for anything; it will anticipate what you need, similar to what Google is doing with Google Now. “Why do you even have to ask? Google Now is anticipating and is pushing you the right notification or piece of information,” he said.
In order to pull that off, Singh says you need the data. There are hundreds of sources to pull from, but only a few can determine intent. Google has your search history, so if you searched for driving directions, it may push you traffic information. Another valuable source of data is purchase history; companies like American Express or Amazon are good sources for that, he said.
At Tempo AI, context comes from your calendar. The calendar knows that you are going on a work trip in June and to Cancun in July. “When you look at your calendar, it’s a good indicator of where you spend your time, and what’s important to you,” he said. “It’s the only app on the phone that can tell your future.”
The Tempo calendar, which is available on iPhone for free, pulls in data from around 30 data sources, including Foursquare, Yelp, Dropbox, Facebook, Google+, Klout, Flickr and LinkedIn. Within the calendar, it will give you more information about the person you are meeting with, provide driving directions and parking instructions, and post your flight status. It will also dial you into conference calls without the hassle of memorizing passcodes.
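The article doesn’t detail how Tempo stitches these sources together. As a minimal sketch under that assumption, enriching a single calendar event might look something like this, with every lookup function standing in for a real integration:

```python
# Minimal sketch, not Tempo AI's actual code: enrich a calendar event with
# context pulled from several (stubbed) data sources. Every function and
# field name here is a hypothetical stand-in for a real integration.

def lookup_contact(attendee):
    # Stand-in for a LinkedIn- or Facebook-style profile lookup.
    return {"name": attendee, "title": "unknown"}

def lookup_venue(location):
    # Stand-in for a Foursquare- or Yelp-style venue lookup.
    return {"address": location, "parking": "unknown"}

def enrich_event(event):
    """Merge attendee and venue context into a raw calendar event."""
    event["attendee_info"] = [lookup_contact(a) for a in event["attendees"]]
    event["venue_info"] = lookup_venue(event["location"])
    return event

meeting = {"title": "Coffee with Raj",
           "attendees": ["Raj Singh"],
           "location": "Starbucks, Seattle"}
print(enrich_event(meeting))
```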
From there, Singh says there are lots of different directions to take the app. Imagine your calendar recommending people you should meet based on others who are meeting the same people you are, he said. Tempo is also learning interesting things, like which Starbucks will be the most popular next week, based on appointments people have already scheduled.
While artificial intelligence is a fancy name for it, Singh said it is the craft of bringing together a lot of data sources in a meaningful way: “People use a lot of different tools, and they are scattered across the cloud,” he said. “We are trying to help them be better prepared for their day.”
BY TAYLOR SOPER on December 3, 2014 at 10:58 am
The Paul G. Allen Family Foundation today awarded $5.7 million to seven researchers in the artificial intelligence field as part of the most recent Allen Distinguished Investigator (ADI) Program grant.
The researchers, who are working on machine reading, diagram interpretation and reasoning, and spatial and temporal reasoning, hail from four universities around the globe — four of them work at the University of Washington.
“The Allen Distinguished Investigator program has become a platform for scientists and researchers to push the boundaries on the conventional and test the limits of how we think about our existence and the world as we know it,” Dune Ives, co-manager of The Paul G. Allen Family Foundation, said in a statement. “We are only beginning to grasp how deep intelligence works. We hope these grants serve as a valuable catalyst for one day making artificial intelligence a reality.”
[Related: The next battleground for Amazon, Microsoft, Facebook and Google: Artificial Intelligence]
The ADI program started in 2010 and this marks the first commitment to researchers in the artificial intelligence field. The focus on AI topics for 2014 is related to the vision of the new Allen Institute for AI, a multi-million dollar effort created by Allen and led by CEO Oren Etzioni that could have huge implications for the region’s tech industry and, more importantly, society as a whole. Etzioni, a former UW computer science professor and veteran entrepreneur, began work at the institute in September 2013.
However, the ADI Program is distinct from the new Allen Institute for AI and is fully funded and operated by Allen’s foundation.
Here are the recipients, with descriptions from the foundation:
Devi Parikh, Virginia Tech
The vast majority of human interaction with the world is guided by common sense. We use common sense to understand objects in our visual world – such as birds flying and balls moving after being kicked. How do we impart this common sense to machines? Machines today cannot learn common sense directly from the visual world because they cannot accurately perform detailed visual recognition in images and video. In this project, Parikh proposes to simplify the visual world for machines by leveraging abstract scenes to teach machines common sense.
Maneesh Agrawala, University of California, Berkeley, and Jeffrey Heer, University of Washington
For hundreds of years, humans have communicated through visualizations. While the world has changed, we continue to communicate complex ideas and tell stories through visuals. Today, charts and graphs are ubiquitous forms of graphics, appearing in scientific papers, textbooks, reports, news articles and webpages. While people can easily interpret data from charts and graphs, machines do not have the same ability. Agrawala and Heer will develop computational models for interpreting these visualizations and diagrams. Once machines are better able to “read” these diagrams, they can extract useful data and relationships to drive improved information applications.
Sebastian Riedel, University College London
Machines have two ways to store knowledge and reason with it. The first is logic, using symbols and rules; the second is vectors, sequences of real numbers. Both have benefits and limitations: logic is very expressive and a good tool for proving statements, while vectors are highly scalable. Riedel will investigate an approach where machines convert symbolic knowledge, read from text and other sources, into vector form, and then approximate the behavior of logic through algebraic operations. Ultimately, this approach will enable machines to pass high-school science exams or perform automatic fact checking.
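To make the idea concrete, here is a toy sketch, not Riedel’s actual system: if facts are embedded as vectors, checking a fact’s plausibility reduces to a dot product rather than a symbolic proof. The embeddings below are random placeholders that a real system would learn from text:

```python
import numpy as np

# Toy illustration of the general idea, not Riedel's actual system: a fact
# is scored by the dot product of a relation vector and an entity-pair
# vector, so plausibility checks become vector algebra instead of proofs.
# The embeddings below are random placeholders; a real system learns them.
rng = np.random.default_rng(0)
dim = 8

relations = {"capital_of": rng.normal(size=dim)}
entity_pairs = {("Paris", "France"): rng.normal(size=dim)}

def plausibility(relation, pair):
    """Map the dot-product score to a (0, 1) plausibility via a sigmoid."""
    score = relations[relation] @ entity_pairs[pair]
    return 1.0 / (1.0 + np.exp(-score))

print(plausibility("capital_of", ("Paris", "France")))
```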
Ali Farhadi, University of Washington and Hannaneh Hajishirzi, University of Washington
Farhadi and Hajishirzi’s project seeks to teach computers to interpret diagrams the same way children are taught in school. Diagram understanding is an essential skill for children since textbooks and exam questions use diagrams to convey important information that is otherwise difficult to convey in text. Children gradually learn to interpret diagrams and extend their knowledge and reasoning skills as they proceed to higher grades. For computers, diagram interpretation is an essential element in automatically understanding textbooks and answering science questions. The cornerstone of this project is its Spoon Feed Learning framework (SPEL), which marries principles of child education and machine learning. SPEL gradually learns diagrammatic and relevant real-world knowledge from textbooks (starting from pre-school) and uses what it’s learned at each grade to learn and collect new knowledge in the next, more complex grade. SPEL takes advantage of coupling automatic visual identification, textual alignment, and reasoning across different levels of complexity.
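The description suggests a curriculum: train on the simplest material first and carry what was learned into each harder grade. A generic curriculum-learning loop in that spirit, which is not SPEL itself and in which every name is a placeholder, might look like:

```python
# Generic curriculum-learning loop in the spirit of the description above;
# this is not the SPEL framework itself, and every name is a placeholder.

def train_one_grade(model, examples):
    # Placeholder: update the model on this grade's diagrams and text.
    model["seen"].extend(examples)
    return model

curriculum = [
    ("pre-school", ["simple shapes"]),
    ("grade 1", ["labeled diagrams"]),
    ("grade 2", ["multi-step process diagrams"]),
]

model = {"seen": []}
for grade, examples in curriculum:  # easiest material first
    model = train_one_grade(model, examples)
    print(f"after {grade}: {len(model['seen'])} concepts learned")
```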
Luke Zettlemoyer, University of Washington
The vast majority of knowledge and information we as humans have accumulated is in text form. Computers currently are not able to translate that data into action. Zettlemoyer is building a new class of semantic parsing algorithms for the extraction of scientific knowledge in STEM domains, such as biology and chemistry. This knowledge will support the design of next-generation, automated question-answering (QA) systems. While existing QA systems, including IBM’s Watson system for Jeopardy, have been very successful, they are typically limited to factual question answering. In contrast, Zettlemoyer’s work aims, in the long term, to enable a machine to automatically read any textbook, extract all of the knowledge it contains, and then use this information to pass a college-level exam on the subject matter.
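Semantic parsing maps a sentence to a machine-readable meaning representation. As a toy illustration rather than Zettlemoyer’s algorithms, a question might be reduced to a (predicate, argument) form that a QA system could execute against a knowledge base; the single pattern below is a hypothetical example:

```python
# Toy sketch of semantic parsing, not Zettlemoyer's actual algorithms: map
# a natural-language science question to a structured (predicate, argument)
# logical form that a QA system could execute against a knowledge base.
# The single pattern below is a hypothetical example.

PATTERNS = {
    "what is the boiling point of": "boiling_point",
}

def parse_question(question):
    """Return a (predicate, argument) logical form for a known pattern."""
    text = question.lower().rstrip("?").strip()
    for prefix, predicate in PATTERNS.items():
        if text.startswith(prefix):
            return predicate, text[len(prefix):].strip()
    return None

print(parse_question("What is the boiling point of water?"))
# -> ('boiling_point', 'water')
```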