Showing posts with label understanding. Show all posts

Build your own satellite

Honestly! It seems as if increasingly there is no activity that dedicated amateurs aren't willing to have a go at. Formerly, building satellites was the preserve of NASA and specialists like the Jet Propulsion Laboratory, but thanks to a greatly simplified design, called the CubeSat, radio amateurs are building and launching their own communications satellites into low Earth orbit. They even have their own governing organisation, called AMSAT. So if you have a project that needs its own communications satellite, why not build your own?



from The Universal Machine http://universal-machine.blogspot.com/


New Research Challenges in Language Understanding



We held the first global Language Understanding and Knowledge Discovery Focused Faculty Workshop in Nanjing, China, on November 14-15, 2013. Thirty-four faculty members joined the workshop, arriving from 10 countries and regions across APAC, EMEA and the US. Googlers from Research, Engineering and University Relations/University Programs also attended the event.

The 2-day workshop included keynote talks, panel discussions and break-out sessions [agenda]. It was an engaging and productive workshop, and we saw lots of positive interactions among the attendees. The workshop encouraged communication between Google and faculty around the world working in these areas.

Research in text mining continues to explore open questions relating to entity annotation, relation extraction, and more. The workshop’s goal was to brainstorm and discuss relevant topics to further investigate these areas. Ultimately, this research should help provide users with search results that are much more relevant to them.

At the end of the workshop, participants identified four topics representing challenges and opportunities for further exploration in Language Understanding and Knowledge Discovery:

  • Knowledge representation, integration, and maintenance
  • Efficient and scalable infrastructure and algorithms for inferencing
  • Presentation and explanation of knowledge
  • Multilingual computation

Going forward, Google will be collaborating with academic researchers on a position paper related to these topics. We also welcome faculty interested in contributing to further research in this area to submit a proposal to the Faculty Research Awards program. Faculty Research Awards are one-year grants to researchers working in areas of mutual interest.

The faculty attendees responded positively to the focused workshop format, as it allowed time to go in depth into important and timely research questions. Encouraged by their feedback, we are considering similar workshops on other topics in the future.

From Interaction to Understanding

This Thursday evening's Gibbons Lecture is by Mark Billinghurst of the Human Interface Technology Laboratory New Zealand, at the University of Canterbury. The lecture, titled "From Interaction to Understanding", will focus on empathic computing, in which computers can recognise and understand emotions. The lecture is this Thursday the 21st at 6:00pm for a 6:30pm start. Click the lecture link for full venue details; if you can't attend, the lecture will be streamed live and available after the event.

from The Universal Machine http://universal-machine.blogspot.com/


Natural Language Understanding focused awards announced



Some of the biggest challenges for the scientific community today involve understanding the principles and mechanisms that underlie natural language use on the Web. An example of a long-standing problem is language ambiguity: when somebody types the word “Rio” in a query, do they mean the city, a movie, a casino, or something else? Understanding the difference can be crucial to help users get the answer they are looking for. In the past few years, a significant effort in industry and academia has focused on disambiguating language with respect to Web-scale knowledge repositories such as Wikipedia and Freebase. These resources are used primarily as canonical, although incomplete, collections of “entities”. As entities are often connected in multiple ways, e.g., explicitly via hyperlinks and implicitly via factual information, such resources can be naturally thought of as (knowledge) graphs. This work has provided the first breakthroughs towards anchoring language in the Web to interpretable, albeit initially shallow, semantic representations. Google has brought the vision of semantic search directly to millions of users via the adoption of the Knowledge Graph. This massive change to search technology has also been called a shift “from strings to things”.
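The disambiguation problem can be sketched as a toy scorer that matches query terms against each candidate entity's context words. Everything below (the candidate names and their context sets) is illustrative, not real Wikipedia or Freebase data:

```python
# Toy entity disambiguation: pick the candidate entity whose
# context words best overlap the terms in the query.

CANDIDATES = {
    "Rio de Janeiro (city)": {"city", "brazil", "beach", "carnival", "travel"},
    "Rio (2011 film)": {"movie", "film", "animated", "macaw", "trailer"},
    "Rio All-Suite Hotel & Casino": {"casino", "hotel", "vegas", "poker"},
}

def disambiguate(query: str) -> str:
    terms = set(query.lower().split())
    # Score each candidate by the number of shared context terms.
    scores = {name: len(terms & ctx) for name, ctx in CANDIDATES.items()}
    return max(scores, key=scores.get)

print(disambiguate("rio movie trailer"))        # Rio (2011 film)
print(disambiguate("flights to rio brazil"))    # Rio de Janeiro (city)
```

Real systems replace the bag-of-words overlap with learned models and exploit the link structure of the knowledge graph, but the core shape of the task is the same: map an ambiguous string to one entity out of a candidate set.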

Understanding natural language is at the core of Google's work to help people get the information they need as quickly and easily as possible. At Google we work hard to advance the state of the art in natural language processing, to improve the understanding of fundamental principles, and to solve the algorithmic and engineering challenges to make these technologies part of everyday life. Language is inherently productive; an infinite number of meaningful new expressions can be formed by combining the meaning of their components systematically. The logical next step is the semantic modeling of structured meaningful expressions -- in other words, “what is said” about entities. We envision that knowledge graphs will support the next leap forward in language understanding towards scalable compositional analyses, by providing a universe of entities, facts and relations upon which semantic composition operations can be designed and implemented.
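As a minimal illustration of what a composition operation over a knowledge graph can look like, facts can be stored as (subject, relation, object) triples and new relations derived by chaining existing ones. The entities and relations below are a hypothetical toy example, not actual Knowledge Graph content:

```python
# A tiny knowledge graph as (subject, relation, object) triples.
TRIPLES = {
    ("alice", "parent_of", "bob"),
    ("bob", "parent_of", "carol"),
}

def objects(subj, rel):
    """All objects reachable from subj via a single relation."""
    return {o for s, r, o in TRIPLES if s == subj and r == rel}

def compose(subj, rel1, rel2):
    """Compose two relations: follow rel1 from subj, then rel2 from each result.
    E.g. parent_of composed with parent_of yields grandparent_of."""
    return {o for mid in objects(subj, rel1) for o in objects(mid, rel2)}

print(compose("alice", "parent_of", "parent_of"))  # {'carol'}
```

The awarded research aims far beyond such symbolic chaining (e.g. tensor factorization and learned inference over incomplete graphs), but the sketch shows the basic idea of building new meaning from existing relations.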

So we’ve just awarded over $1.2 million to support several natural language understanding research awards given to university research groups doing work in this area. Research topics range from semantic parsing to statistical models of life stories and novel compositional inference and representation approaches to modeling relations and events in the Knowledge Graph.

These awards went to researchers in nine universities and institutions worldwide, selected after a rigorous internal review:

  • Mark Johnson and Lan Du (Macquarie University) and Wray Buntine (NICTA) for “Generative models of Life Stories”
  • Percy Liang and Christopher Manning (Stanford University) for “Tensor Factorizing Knowledge Graphs”
  • Sebastian Riedel (University College London) and Andrew McCallum (University of Massachusetts, Amherst) for “Populating a Knowledge Base of Compositional Universal Schema”
  • Ivan Titov (University of Amsterdam) for “Learning to Reason by Exploiting Grounded Text Collections”
  • Hans Uszkoreit (Saarland University and DFKI), Feiyu Xu (DFKI and Saarland University) and Roberto Navigli (Sapienza University of Rome) for “Language Understanding cum Knowledge Yield”
  • Luke Zettlemoyer (University of Washington) for “Weakly Supervised Learning for Semantic Parsing with Knowledge Graphs”

We believe the results will be broadly useful to product development and will further scientific research. We look forward to working with these researchers, and we hope we will jointly push the frontier of natural language understanding research to the next level.

Building a deeper understanding of images



The ImageNet large-scale visual recognition challenge (ILSVRC) is the largest academic challenge in computer vision, held annually to test state-of-the-art technology in image understanding, both in the sense of recognizing objects in images and locating where they are. Participants in the competition include leading academic institutions and industry labs. In 2012 it was won by DNNResearch using the convolutional neural network approach described in the now-seminal paper by Krizhevsky et al.[4]

In this year’s challenge, team GoogLeNet (named in homage to LeNet, Yann LeCun's influential convolutional network) placed first in the classification and detection (with extra training data) tasks, doubling the quality on both tasks over last year's results. The team participated with an open submission, meaning that the exact details of its approach are shared with the wider computer vision community to foster collaboration and accelerate progress in the field.
The competition has three tracks: classification, classification with localization, and detection. The classification track measures an algorithm’s ability to assign correct labels to an image. The classification with localization track is designed to assess how well an algorithm models both the labels of an image and the location of the underlying objects. Finally, the detection challenge is similar, but uses much stricter evaluation criteria. As an additional difficulty, this challenge includes many images with tiny objects that are hard to recognize. Superior performance in the detection challenge requires pushing beyond annotating an image with a “bag of labels” -- a model must be able to describe a complex scene by accurately locating and identifying many objects in it. As examples, the images in this post are actual top-scoring inferences of the GoogLeNet detection model on the validation set of the detection challenge.
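The stricter detection criterion is typically expressed as intersection-over-union (IoU) between a predicted bounding box and a ground-truth box: a prediction counts as correct only if the label matches and the IoU exceeds a threshold (0.5 in standard ILSVRC detection scoring). A minimal sketch of the overlap measure:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # 1.0 (identical boxes)
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # partial overlap, ~0.14
```

Tiny objects are punishing under this metric: a few pixels of error on a small box can drop the IoU below the threshold, which is one reason the detection track is so much harder than classification.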
This work was a concerted effort by Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Drago Anguelov, Dumitru Erhan, Andrew Rabinovich and myself. Two of the team members -- Wei Liu and Scott Reed -- are PhD students who are a part of the intern program here at Google, and actively participated in the work leading to the submissions. Without their dedication the team could not have won the detection challenge.

This effort was accomplished by using the DistBelief infrastructure, which makes it possible to train neural networks in a distributed manner and rapidly iterate. At the core of the approach is a radically redesigned convolutional network architecture. Its seemingly complex structure (typical incarnations of which consist of over 100 layers with a maximum depth of over 20 parameter layers) is based on two insights: the Hebbian principle and scale invariance. As a consequence of a careful balancing act, the depth and width of the network are both increased significantly at the cost of a modest growth in evaluation time. The resultant architecture leads to an over 10x reduction in the number of parameters compared to most state-of-the-art vision networks. This reduces overfitting during training and allows our system to perform inference with a low memory footprint.
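One common way to achieve such a parameter reduction, and a known ingredient of the GoogLeNet design, is to insert cheap 1x1 convolutions that shrink the channel count before expensive larger convolutions. The arithmetic below uses illustrative channel sizes, not the actual GoogLeNet layer configuration:

```python
def conv_params(k, in_ch, out_ch):
    """Weight count of a k x k convolution layer (biases ignored)."""
    return k * k * in_ch * out_ch

# Direct 5x5 convolution mapping 256 channels to 256 channels:
direct = conv_params(5, 256, 256)
# Same output shape via a 1x1 "bottleneck" down to 64 channels, then 5x5:
bottleneck = conv_params(1, 256, 64) + conv_params(5, 64, 256)

print(direct)      # 1638400
print(bottleneck)  # 425984 -- roughly a 4x reduction in this toy case
```

Stacking many such bottlenecked modules is how a network can grow in both depth and width while keeping the total parameter count, and hence overfitting and memory footprint, under control.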
For the detection challenge, the improved neural network model is used in the sophisticated R-CNN detector by Ross Girshick et al.[2], with additional proposals coming from the multibox method[1]. For the classification challenge entry, several ideas from the work of Andrew Howard[3] were incorporated and extended, specifically as they relate to image sampling during training and evaluation. The systems were evaluated both stand-alone and as ensembles (averaging the outputs of up to seven models) and their results were submitted as separate entries for transparency and comparison.
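Ensembling by averaging, as mentioned above, simply means averaging the per-class probability outputs of several independently trained models before picking the top label. A minimal sketch with toy probabilities (the real systems averaged up to seven full networks):

```python
def ensemble(prob_lists):
    """Average per-class probabilities across several models' outputs."""
    n = len(prob_lists)
    return [sum(ps) / n for ps in zip(*prob_lists)]

# Two models' (toy) probability vectors over three classes:
model_a = [0.7, 0.2, 0.1]
model_b = [0.5, 0.4, 0.1]

avg = ensemble([model_a, model_b])
print(avg)  # approximately [0.6, 0.3, 0.1]
```

Because different models make partly uncorrelated errors, the averaged distribution is usually better calibrated than any single model's, which is why the ensemble entries scored higher than the stand-alone ones.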

These technological advances will enable even better image understanding on our side and the progress is directly transferable to Google products such as photo search, image search, YouTube, self-driving cars, and any place where it is useful to understand what is in an image as well as where things are.

References:

[1] Erhan D., Szegedy C., Toshev, A., and Anguelov, D., "Scalable Object Detection using Deep Neural Networks", The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 2147-2154.

[2] Girshick, R., Donahue, J., Darrell, T., and Malik, J., "Rich feature hierarchies for accurate object detection and semantic segmentation", arXiv preprint arXiv:1311.2524, 2013.

[3] Howard, A. G., "Some Improvements on Deep Convolutional Neural Network Based Image Classification", arXiv preprint arXiv:1312.5402, 2013.

[4] Krizhevsky, A., Sutskever, I., and Hinton, G., "ImageNet classification with deep convolutional neural networks", Advances in Neural Information Processing Systems, 2012.
