
Announcing the Google MOOC Focused Research Awards



Last year, Google and Tsinghua University hosted the 2014 APAC MOOC Focused Faculty Workshop, an event designed to share, brainstorm and generate ideas aimed at fostering MOOC innovation. As a result of the ideas generated at the workshop, we solicited proposals from the attendees for research collaborations that would advance important topics in MOOC development.

After expert reviews and committee discussions, we are pleased to announce the following recipients of the MOOC Focused Research Awards. These awards cover research exploring new interactions to enhance the learning experience, personalized learning, online community building, interoperability of online learning platforms, and education accessibility:

  • “MOOC Visual Analytics” - Michael Ginda, Indiana University, United States
  • “Improvement of students’ interaction in MOOCs using participative networks” - Pedro A. PernĂ­as Peco, Universidad de Alicante, Spain
  • “Automated Analysis of MOOC Discussion Content to Support Personalised Learning” - Katrina Falkner, The University of Adelaide, Australia
  • “Extending the Offline Capability of Spoken Tutorial Methodology” - Kannan Moudgalya, Indian Institute of Technology Bombay, India
  • “Launching the Pan Pacific ISTP (Information Science and Technology Program) through MOOCs” - Yasushi Kodama, Hosei University, Japan
  • “Fostering Engagement and Social Learning with Incentive Schemes and Gamification Elements in MOOCs” - Thomas Schildhauer, Alexander von Humboldt Institute for Internet and Society, Germany
  • “Reusability Measurement and Social Community Analysis from MOOC Content Users” - Timothy K. Shih, National Central University, Taiwan

In order to further support these projects and foster collaboration, we have begun pairing the award recipients with Googlers pursuing online education research as well as product development teams.

Google is committed to supporting innovation in online learning at scale, and we congratulate the recipients of the MOOC Focused Research Awards. It is our belief that these collaborations will further develop the potential of online education, and we are very pleased to work with these researchers to jointly push the frontier of MOOCs.

Natural Language Understanding focused awards announced



Some of the biggest challenges for the scientific community today involve understanding the principles and mechanisms that underlie natural language use on the Web. An example of a long-standing problem is language ambiguity: when somebody types the word “Rio” in a query, do they mean the city, a movie, a casino, or something else? Understanding the difference can be crucial to helping users get the answer they are looking for. In the past few years, a significant effort in industry and academia has focused on disambiguating language with respect to Web-scale knowledge repositories such as Wikipedia and Freebase. These resources are used primarily as canonical, although incomplete, collections of “entities”. As entities are often connected in multiple ways, e.g., explicitly via hyperlinks and implicitly via factual information, such resources can be naturally thought of as (knowledge) graphs. This work has provided the first breakthroughs towards anchoring language in the Web to interpretable, albeit initially shallow, semantic representations. Google has brought the vision of semantic search directly to millions of users via the adoption of the Knowledge Graph. This massive change to search technology has also been called a shift “from strings to things”.
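To make the “Rio” example concrete, here is a minimal sketch of entity disambiguation against a small knowledge repository. All entity names, prior scores, and context words below are made up for illustration; real systems use far richer features and Web-scale resources, but the core idea of combining an entity's popularity prior with contextual evidence is the same:

```python
# Toy entity disambiguation: pick the most likely entity for a mention,
# combining a popularity prior with overlap between the query's words
# and each candidate's context words. All values are illustrative.

CANDIDATES = {
    "Rio": [
        {"entity": "Rio_de_Janeiro", "prior": 0.6,
         "context": {"city", "brazil", "carnival", "beach"}},
        {"entity": "Rio_(film)", "prior": 0.3,
         "context": {"movie", "film", "animated", "parrot"}},
        {"entity": "Rio_Casino", "prior": 0.1,
         "context": {"casino", "vegas", "hotel", "poker"}},
    ],
}

def disambiguate(mention, query_words):
    """Score each candidate by prior plus context-word overlap."""
    best_entity, best_score = None, float("-inf")
    for cand in CANDIDATES.get(mention, []):
        overlap = len(cand["context"] & set(query_words))
        score = cand["prior"] + overlap  # naive linear combination
        if score > best_score:
            best_entity, best_score = cand["entity"], score
    return best_entity

print(disambiguate("Rio", {"movie", "tickets"}))  # context favors the film
print(disambiguate("Rio", {"flights"}))           # no overlap: prior wins
```

With no contextual overlap the popularity prior decides, which is why an ambiguous query like “Rio flights” defaults to the most common sense, while “Rio movie tickets” resolves to the film.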

Understanding natural language is at the core of Google's work to help people get the information they need as quickly and easily as possible. At Google we work hard to advance the state of the art in natural language processing, to improve the understanding of fundamental principles, and to solve the algorithmic and engineering challenges to make these technologies part of everyday life. Language is inherently productive; an infinite number of meaningful new expressions can be formed by combining the meanings of their components systematically. The logical next step is the semantic modeling of structured meaningful expressions: in other words, “what is said” about entities. We envision that knowledge graphs will support the next leap forward in language understanding towards scalable compositional analyses, by providing a universe of entities, facts and relations upon which semantic composition operations can be designed and implemented.
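One way to picture “semantic composition operations” over a knowledge graph is chaining relations between entities. The sketch below is a toy illustration under invented facts (the triples and relation names are not from any real knowledge base): answering a composed question like “the capital of the country that Rio de Janeiro is in” by following one relation after another:

```python
# Toy triple store and relation composition. All facts are illustrative.
FACTS = {
    ("Rio_de_Janeiro", "country"): "Brazil",
    ("Brazil", "capital"): "Brasilia",
}

def follow(entity, relation):
    """Look up the object of the triple (entity, relation, ?)."""
    return FACTS.get((entity, relation))

def compose(entity, relations):
    """Apply relations in sequence; None if any hop is missing."""
    for rel in relations:
        entity = follow(entity, rel)
        if entity is None:
            return None
    return entity

# "capital of the country of Rio de Janeiro"
print(compose("Rio_de_Janeiro", ["country", "capital"]))
```

The point of the sketch is that meanings compose: the answer to the chained question is built systematically from the meanings of its parts, which is exactly the property the paragraph above attributes to natural language.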

We have just awarded over $1.2 million in natural language understanding research awards to university research groups doing work in this area. Research topics range from semantic parsing to statistical models of life stories, and from novel compositional inference to representation approaches for modeling relations and events in the Knowledge Graph.

These awards went to researchers in nine universities and institutions worldwide, selected after a rigorous internal review:

  • Mark Johnson and Lan Du (Macquarie University) and Wray Buntine (NICTA) for “Generative models of Life Stories”
  • Percy Liang and Christopher Manning (Stanford University) for “Tensor Factorizing Knowledge Graphs”
  • Sebastian Riedel (University College London) and Andrew McCallum (University of Massachusetts, Amherst) for “Populating a Knowledge Base of Compositional Universal Schema”
  • Ivan Titov (University of Amsterdam) for “Learning to Reason by Exploiting Grounded Text Collections”
  • Hans Uszkoreit (Saarland University and DFKI), Feiyu Xu (DFKI and Saarland University) and Roberto Navigli (Sapienza University of Rome) for “Language Understanding cum Knowledge Yield”
  • Luke Zettlemoyer (University of Washington) for “Weakly Supervised Learning for Semantic Parsing with Knowledge Graphs”

We believe the results will be broadly useful to product development and will further scientific research. We look forward to working with these researchers, and we hope we will jointly push the frontier of natural language understanding research to the next level.