
Announcing Google’s 2015 Global PhD Fellows



In 2009, Google created the PhD Fellowship program to recognize and support outstanding graduate students doing exceptional research in Computer Science and related disciplines. Now in their seventh year, our fellowship programs have collectively supported over 200 graduate students in Australia, China and East Asia, India, North America, and Europe and the Middle East who seek to shape and influence the future of technology.

Reflecting our continuing commitment to building mutually beneficial relationships with the academic community, we are excited to announce the 44 students from around the globe who are recipients of the award. We offer our sincere congratulations to Google’s 2015 Class of PhD Fellows!

Australia

  • Bahar Salehi, Natural Language Processing (University of Melbourne)
  • Siqi Liu, Computational Neuroscience (University of Sydney)
  • Qian Ge, Systems (University of New South Wales)

China and East Asia

  • Bo Xin, Artificial Intelligence (Peking University)
  • Xingyu Zeng, Computer Vision (The Chinese University of Hong Kong)
  • Suining He, Mobile Computing (The Hong Kong University of Science and Technology)
  • Zhenzhe Zheng, Mobile Networking (Shanghai Jiao Tong University)
  • Jinpeng Wang, Natural Language Processing (Peking University)
  • Zijia Lin, Search and Information Retrieval (Tsinghua University)
  • Shinae Woo, Networking and Distributed Systems (Korea Advanced Institute of Science and Technology)
  • Jungdam Won, Robotics (Seoul National University)

India

  • Palash Dey, Algorithms (Indian Institute of Science)
  • Avisek Lahiri, Machine Perception (Indian Institute of Technology Kharagpur)
  • Malavika Samak, Programming Languages and Software Engineering (Indian Institute of Science)

Europe and the Middle East

  • Heike Adel, Natural Language Processing (University of Munich)
  • Thang Bui, Speech Technology (University of Cambridge)
  • Victoria Caparrós Cabezas, Distributed Systems (ETH Zurich)
  • Nadav Cohen, Machine Learning (The Hebrew University of Jerusalem)
  • Josip Djolonga, Probabilistic Inference (ETH Zurich)
  • Jakob Julian Engel, Computer Vision (Technische Universität München)
  • Nikola Gvozdiev, Computer Networking (University College London)
  • Felix Hill, Language Understanding (University of Cambridge)
  • Durk Kingma, Deep Learning (University of Amsterdam)
  • Massimo Nicosia, Statistical Natural Language Processing (University of Trento)
  • George Prekas, Operating Systems (École Polytechnique Fédérale de Lausanne)
  • Roman Prutkin, Graph Algorithms (Karlsruhe Institute of Technology)
  • Siva Reddy, Multilingual Semantic Parsing (The University of Edinburgh)
  • Immanuel Trummer, Structured Data Analysis (École Polytechnique Fédérale de Lausanne)
  • Margarita Vald, Security (Tel Aviv University)

North America

  • Waleed Ammar, Natural Language Processing (Carnegie Mellon University)
  • Justin Meza, Systems Reliability (Carnegie Mellon University)
  • Nick Arnosti, Market Algorithms (Stanford University)
  • Osbert Bastani, Programming Languages (Stanford University)
  • Saurabh Gupta, Computer Vision (University of California, Berkeley)
  • Masoud Moshref Javadi, Computer Networking (University of Southern California)
  • Muhammad Naveed, Security (University of Illinois at Urbana-Champaign)
  • Aaron Parks, Mobile Networking (University of Washington)
  • Kyle Rector, Human Computer Interaction (University of Washington)
  • Riley Spahn, Privacy (Columbia University)
  • Yun Teng, Computer Graphics (University of California, Santa Barbara)
  • Carl Vondrick, Machine Perception (Massachusetts Institute of Technology)
  • Xiaolan Wang, Structured Data (University of Massachusetts Amherst)
  • Tan Zhang, Mobile Systems (University of Wisconsin-Madison)
  • Wojciech Zaremba, Machine Learning (New York University)

Pulling Back the Curtain on Google’s Network Infrastructure



Pursuing Google’s mission of organizing the world’s information to make it universally accessible and useful takes an enormous amount of computing and storage. In fact, it requires coordination across a warehouse-scale computer. Ten years ago, we realized that we could not purchase, at any price, a datacenter network that could meet the combination of our scale and speed requirements. So, we set out to build our own datacenter network hardware and software infrastructure. Today, at the ACM SIGCOMM conference, we are presenting a paper with the technical details of five generations of our in-house datacenter network architecture, expanding on a talk we gave at the Open Network Summit a few months ago.

From relatively humble beginnings, and after a misstep or two, we’ve built and deployed five generations of datacenter network infrastructure. Our latest-generation Jupiter network has improved capacity by more than 100x relative to our first-generation network, delivering more than 1 petabit/sec of total bisection bandwidth. This means that each of 100,000 servers can communicate with one another in an arbitrary pattern at 10 Gb/s.
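As a quick sanity check of that arithmetic (a sketch of ours, not from the paper), dividing the quoted bisection bandwidth evenly across the quoted server count does come out to 10 Gb/s per machine:

```python
# 1 petabit/sec of bisection bandwidth spread evenly over 100,000 servers.
total_bisection_bps = 1e15   # 1 Pb/s in bits per second
servers = 100_000
print(total_bisection_bps / servers / 1e9, "Gb/s per server")  # -> 10.0
```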

Such network performance has been tremendously empowering for Google services. Engineers were liberated from optimizing their code for various levels of bandwidth hierarchy. For example, they no longer had to trade careful data locality (placing servers under the same top-of-rack switch) against the correlated failures caused by that single switch failing. A high-performance network supporting tens of thousands of servers with flat bandwidth also enabled individual applications to scale far beyond what was otherwise possible and enabled tight coupling among multiple federated services. Finally, we were able to substantially improve the efficiency of our compute and storage infrastructure. As quantified in this recent paper, scheduling a set of jobs over a single larger domain supports much higher utilization than scheduling the same jobs over multiple smaller domains.

Delivering such a network meant we had to solve some fundamental problems in networking. Ten years ago, networking was defined by the interaction of individual hardware elements, e.g., switches, speaking standardized protocols to dynamically learn what the network looks like. Based on this dynamically learned information, switches would set their forwarding behavior. While robust, these protocols targeted deployment environments with perhaps tens of switches communicating between multiple organizations. Configuring and managing switches in such an environment was manual and error prone. Changes in network state would spread slowly through the network using a high-overhead broadcast protocol. Most challenging of all, the system could not scale to meet our needs.

We adopted a set of principles to organize our networks that is now the primary driver of networking research and industrial innovation: Software Defined Networking (SDN). We observed that we could arrange emerging merchant switch silicon around a Clos topology to scale to the bandwidth requirements of a data center building. The topology of all five generations of our data center networks follows the blueprint below. Unfortunately, this meant that we would potentially require 10,000+ individual switching elements. Even if we could overcome the scalability challenges of existing network protocols, managing and configuring such a vast number of switching elements would be impossible.
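To get a feel for why the switch count balloons, here is a rough back-of-the-envelope sketch (our own illustration, not the paper’s actual topology parameters), using the textbook three-tier folded-Clos ("fat-tree") construction, which supports k^3/4 hosts with 5k^2/4 switches when built from k-port switches:

```python
def fat_tree_scale(k):
    """Hosts and switches in a classic 3-tier fat-tree built from k-port switches."""
    hosts = k ** 3 // 4          # k pods * (k/2 edge switches each) * (k/2 server ports each)
    switches = 5 * k ** 2 // 4   # k^2 edge+aggregation switches plus (k/2)^2 core switches
    return hosts, switches

for k in (24, 48, 96):
    hosts, switches = fat_tree_scale(k)
    print(f"k={k:2d}-port silicon: {hosts:6d} hosts, {switches:5d} switches")
# k=96 already implies well over 10,000 individual switching elements,
# which is why per-switch manual configuration is a non-starter.
```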
So, our next observation was that we knew the exact configuration of the network we were trying to build. Every switch would play a well-defined role in a larger ensemble. That is, from the perspective of a datacenter building, we wished to build a logical building-scale network. For network management, we built a single configuration for the entire network and pushed that global configuration to all switches; each switch would simply pull out its role in the larger whole. Finally, we transformed routing from a pair-wise, fully distributed (but somewhat slow and high overhead) scheme to a logically-centralized scheme under the control of a single dynamically-elected master, as illustrated in the figure below from our paper.
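The following is a minimal sketch of that single-global-configuration idea (the switch names, roles, and dictionary layout are invented for illustration, not Google’s actual format): one blueprint describes the whole fabric, and each switch extracts only the piece relevant to its role.

```python
# One centrally generated blueprint for the entire building-scale network.
GLOBAL_CONFIG = {
    "spine-7": {"role": "spine",       "links": ["agg-1", "agg-2"]},
    "agg-1":   {"role": "aggregation", "links": ["spine-7", "tor-11", "tor-12"]},
    "tor-11":  {"role": "top-of-rack", "links": ["agg-1"], "server_ports": 48},
}

def config_for_switch(switch_id: str) -> dict:
    """Each switch pulls out its own role from the single pushed blueprint."""
    return GLOBAL_CONFIG[switch_id]

print(config_for_switch("tor-11"))
```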
Our SIGCOMM paper on Jupiter is one of four Google-authored papers being presented at that conference. Taken together, these papers show the benefits of embedding research within production teams, reflecting both collaborations with PhD students carrying out extended research efforts with Google engineers during internships as well as key insights from deployed production systems:

  • Our work on Bandwidth Enforcer shows how we can allocate wide area bandwidth among tens of thousands of individual applications based on centrally configured policy, substantially improving network utilization while simultaneously isolating services from one another (a generic bandwidth-sharing sketch follows this list).
  • Condor addresses the challenges of designing data center network topologies. Network designers can specify constraints for data center networks; Condor efficiently generates candidate network designs that meet these constraints, and evaluates these candidates against a variety of target metrics.
  • Congestion control in datacenter networks is challenging because of tiny buffers and very small round trip times. TIMELY shows how to manage datacenter bandwidth allocation while maintaining highly responsive and low latency network roundtrips in the data center.
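
As promised above, here is a generic sketch of weighted max-min ("water-filling") bandwidth sharing. It only illustrates the flavor of allocating capacity among applications according to centrally configured weights; it is not Bandwidth Enforcer’s actual algorithm, and the application names and numbers are made up.

```python
def weighted_max_min(capacity, demands, weights):
    """Allocate capacity so unmet demand is shared in proportion to policy weights."""
    alloc = {app: 0.0 for app in demands}
    active = set(demands)
    remaining = capacity
    while active and remaining > 1e-9:
        share = remaining / sum(weights[a] for a in active)
        # Applications whose leftover demand fits within their fair share are fully satisfied.
        satisfied = {a for a in active if demands[a] - alloc[a] <= share * weights[a]}
        if not satisfied:
            for a in active:                      # everyone is bottlenecked: split what's left
                alloc[a] += share * weights[a]
            break
        for a in satisfied:
            remaining -= demands[a] - alloc[a]
            alloc[a] = demands[a]
        active -= satisfied
    return alloc

# 10 Gb/s of WAN capacity shared by three applications with 2:1:1 policy weights.
print(weighted_max_min(10.0, {"copy": 8.0, "logs": 3.0, "web": 1.0},
                       {"copy": 2, "logs": 1, "web": 1}))
# -> {'copy': 6.0, 'logs': 3.0, 'web': 1.0}
```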

These efforts reflect the latest in a long series of substantial Google contributions in networking. We are excited about being increasingly open about the results of our research work: to solicit feedback on our approach, to influence future research and development directions so that we can benefit from community-wide efforts to improve networking, and to attract the next generation of great networking thinkers and builders to Google. Our focus on Google Cloud Platform further increases the importance of being open about our infrastructure. Since the same network that has powered Google’s infrastructure for a decade also underpins our Cloud Platform, all developers can leverage it to build highly robust, manageable, and globally scalable services.

Skill maps, analytics and more with Google’s Course Builder 1.8



Over the past couple of years, Google’s Course Builder has been used to create and deliver hundreds of online courses on a variety of subjects (from sustainable energy to comic books), making learning more scalable and accessible through open source technology. With the help of Course Builder, over a million students of all ages have learned something new.

Today, we’re increasing our commitment to Course Builder by bringing rich, new functionality to the platform with a new release. Of course, we will also continue to work with edX and others to contribute to the entire ecosystem.

This new version enables instructors and students to understand prerequisites and skills explicitly, introduces several improvements to the instructor experience, and even allows you to export data to Google BigQuery for in-depth analysis.
  • Drag and drop, simplified tabs, and student feedback
We’ve made major enhancements to the instructor interface, such as simplifying the tabs and clarifying which part of the page you’re editing, so you can spend more time teaching and less time configuring. You can also structure your course on the fly by dragging and dropping elements directly in the outline.

Additionally, we’ve added the option to include a feedback box at the bottom of each lesson, making it easy for your students to tell you their thoughts (though we can’t promise you’ll always enjoy reading them).
  • Skill Mapping
You can now define prerequisites and skills learned for each lesson. For instance, in a course about arithmetic, addition might be a prerequisite for the lesson on multiplying numbers, while multiplication is a skill learned. Once an instructor has defined the skill relationships, they will have a consolidated view of all their skills and the lessons they appear in, such as this list for Power Searching with Google:
Instructors can then enable a skills widget at the top of each lesson that lets students see exactly what they should know before and after completing it. Below are the prerequisites and goals for the Thinking More Deeply About Your Search lesson. A student can easily see what they should know beforehand and which lessons to explore next to learn more.
Skill maps help a student better understand which content is right for them. And, they lay the groundwork for our future forays into adaptive and personalized learning. Learn more about Course Builder skill maps in this video.
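As a concrete illustration of the idea (a minimal sketch; the lesson names and dictionary layout are ours, not Course Builder’s internal representation), a skill map boils down to each lesson declaring the skills it requires and the skills it teaches:

```python
SKILL_MAP = {
    "Adding numbers":      {"prerequisites": [],           "teaches": ["addition"]},
    "Multiplying numbers": {"prerequisites": ["addition"], "teaches": ["multiplication"]},
}

def lessons_teaching(skill):
    """Consolidated view: which lessons cover a given skill."""
    return [name for name, info in SKILL_MAP.items() if skill in info["teaches"]]

def prerequisites_of(lesson):
    """What a student should already know before starting a lesson."""
    return SKILL_MAP[lesson]["prerequisites"]

print(lessons_teaching("multiplication"))       # -> ['Multiplying numbers']
print(prerequisites_of("Multiplying numbers"))  # -> ['addition']
```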
  • Analytics through BigQuery
One of the core tenets of Course Builder is that quality online learning requires a feedback loop between instructor and student, which is why we’ve always had a focus on providing rich analytical information about a course. But no matter how complete, sometimes the built-in reports just aren’t enough. So Course Builder now includes a pipeline to Google BigQuery, allowing course owners to issue super-fast queries in a SQL-like syntax using the processing power of Google’s infrastructure. This allows you to slice and dice the data in an infinite number of ways, giving you just the information you need to help your students and optimize your course. Watch these videos on configuring and sending data.
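Once the data is in BigQuery, it can be queried from any of the standard clients. Here is a hedged sketch using the BigQuery Python client library; the project, dataset, table, and column names are placeholders, since the actual schema depends on what your Course Builder pipeline exports.

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses your default Google Cloud credentials and project

query = """
    SELECT lesson_id, COUNT(DISTINCT student_id) AS students
    FROM `my_project.course_builder_export.events`
    GROUP BY lesson_id
    ORDER BY students DESC
"""
for row in client.query(query).result():
    print(row.lesson_id, row.students)
```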

To get started with your own course, follow these simple instructions. Please let us know how you use these new features and what you’d like to see in Course Builder next. Need some inspiration? Check out our list of courses (and tell us when you launch yours).

Keep on learning!

Microsoft Challenges Google’s Artificial Brain With ‘Project Adam’

Despite just announcing its biggest ever job cuts, at around 18,000 employees, Microsoft is planning to challenge Google’s Artificial Brain with its own Project Adam. Wired reports: "Like similar deep learning systems, Adam runs across an array of standard computer servers, in this case machines offered up by Microsoft’s Azure cloud computing service. Deep learning aims to more closely mimic the way the brain works by creating neural networks—systems that behave, at least in some respects, like the networks of neurons in your brain—and typically, these neural nets require a large number of servers."

from The Universal Machine http://universal-machine.blogspot.com/


Google’s Course Builder 1.9 improves instructor experience and takes Skill Maps to the next level



(Cross-posted on the Google for Education Blog)

When we last updated Course Builder in April, we said that its skill mapping capabilities were just the beginning. Today’s 1.9 release greatly expands the applicability of these skill maps for you and your students. We’ve also significantly revamped the instructor’s user interface, making it easier for you to get the job done and staying out of your way while you create your online courses.

First, a quick update on project hosting. Course Builder has joined many other Google open source projects on GitHub (download it here). Later this year, we’ll consolidate all of the Course Builder documentation, but for now, get started at Google Open Online Education.

Now, about those features:
  • Measuring competence with skill maps
    In addition to defining skills and prerequisites for each lesson, you can now apply skills to each question in your courses’ assessments. By completing the assessments and activities, learners will be able to measure their level of competence for each skill. For instance, here’s what a student taking Power Searching with Google might see:
This information can help guide them on which sections of the course to revisit. Or, if a pre-test is given, students can focus on the lessons addressing their skill gaps.

To determine how successful the content is at teaching the desired skills across all students, an instructor can review students’ competencies on a new page in the analytics section of the dashboard.
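A minimal sketch of how such a per-skill competence score could be computed (our own illustration with invented skill names, not Course Builder’s implementation): tag each question with skills, then report the fraction of tagged questions answered correctly for each skill.

```python
from collections import defaultdict

def skill_competence(answers):
    """answers: list of (skills_tagged_on_question, answered_correctly) pairs."""
    attempted = defaultdict(int)
    correct = defaultdict(int)
    for skills, was_correct in answers:
        for skill in skills:
            attempted[skill] += 1
            correct[skill] += int(was_correct)
    return {skill: correct[skill] / attempted[skill] for skill in attempted}

answers = [(["filtering"], True),
           (["filtering", "operators"], False),
           (["operators"], True)]
print(skill_competence(answers))  # -> {'filtering': 0.5, 'operators': 0.5}
```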

  • Improving usability when creating a course
Course Builder has a rich set of capabilities, giving you control over every aspect of your course -- but that doesn’t mean it has to be hard to use. Our goal is to help you spend less time setting up your course and more time educating your students. We’ve completely reorganized the dashboard, reducing the number of tabs and making the settings you need clearer and easier to find.
We also added in-place previewing, so you can quickly edit your content and immediately see how it will look without needing to reload any pages.
For a full list of the other features added in this release (including the ability for students to delete their data upon unenrollment, and the removal of the old Files API), see the release notes. As always, please let us know how you use these new features and what you’d like to see in Course Builder next to help make your online course even better.

In the meantime, take a look at a couple of recent online courses that we’re pretty excited about: Sesame Street’s Make Believe with Math and our very own Computational Thinking for Educators.

TensorFlow - Google’s latest machine learning system, open sourced for everyone



Deep Learning has had a huge impact on computer science, making it possible to explore new frontiers of research and to develop amazingly useful products that millions of people use every day. Our internal deep learning infrastructure, DistBelief, developed in 2011, has allowed Googlers to build ever larger neural networks and scale training to thousands of cores in our datacenters. We’ve used it to demonstrate that concepts like “cat” can be learned from unlabeled YouTube images, to improve speech recognition in the Google app by 25%, and to build image search in Google Photos. DistBelief also trained the Inception model that won ImageNet’s Large Scale Visual Recognition Challenge in 2014, and drove our experiments in automated image captioning as well as DeepDream.

While DistBelief was very successful, it had some limitations. It was narrowly targeted to neural networks, it was difficult to configure, and it was tightly coupled to Google’s internal infrastructure - making it nearly impossible to share research code externally.

Today we’re proud to announce the open source release of TensorFlow -- our second-generation machine learning system, specifically designed to correct these shortcomings. TensorFlow is general, flexible, portable, easy-to-use, and completely open source. We added all this while improving upon DistBelief’s speed, scalability, and production readiness -- in fact, on some benchmarks, TensorFlow is twice as fast as DistBelief (see the whitepaper for details of TensorFlow’s programming model and implementation).
TensorFlow has extensive built-in support for deep learning, but is far more general than that -- any computation that you can express as a computational flow graph, you can compute with TensorFlow (see some examples). Any gradient-based machine learning algorithm will benefit from TensorFlow’s auto-differentiation and suite of first-rate optimizers. And it’s easy to express your new ideas in TensorFlow via the flexible Python interface.
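To make the auto-differentiation point concrete, here is a minimal sketch of fitting a line by gradient descent. Note that it uses today’s TensorFlow 2.x eager API (tf.GradientTape); the 2015 release described in this post exposed an explicit graph-and-session interface instead, so treat this as illustrative rather than as the original API.

```python
import tensorflow as tf

# Toy data roughly following y = 3x + 2.
xs = tf.constant([0.0, 1.0, 2.0, 3.0])
ys = tf.constant([2.1, 4.9, 8.2, 10.9])

w = tf.Variable(0.0)
b = tf.Variable(0.0)
opt = tf.keras.optimizers.SGD(learning_rate=0.05)

for step in range(200):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(w * xs + b - ys))
    grads = tape.gradient(loss, [w, b])      # gradients via automatic differentiation
    opt.apply_gradients(zip(grads, [w, b]))

print(float(w), float(b))  # approaches ~3 and ~2
```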
Inspecting a model with TensorBoard, the visualization tool
TensorFlow is great for research, but it’s ready for use in real products too. TensorFlow was built from the ground up to be fast, portable, and ready for production service. You can move your idea seamlessly from training on your desktop GPU to running on your mobile phone. And you can get started quickly with powerful machine learning tech by using our state-of-the-art example model architectures. For example, we plan to release our complete, top-shelf ImageNet computer vision model on TensorFlow soon.

But the most important thing about TensorFlow is that it’s yours. We’ve open-sourced TensorFlow as a standalone library and associated tools, tutorials, and examples with the Apache 2.0 license so you’re free to use TensorFlow at your institution (no matter where you work).

Our deep learning researchers all use TensorFlow in their experiments. Our engineers use it to infuse Google Search with signals derived from deep neural networks, and to power the magic features of tomorrow. We’ll continue to use TensorFlow to serve machine learning in products, and our research team is committed to sharing TensorFlow implementations of our published ideas. We hope you’ll join us at www.tensorflow.org.