
Moore’s Law Part 4: Moore’s Law in other domains

This is the last entry of a series focused on Moore’s Law and its implications moving forward, edited from a white paper on Moore’s Law written by Google University Relations Manager Michel Benard. This series quotes major sources on Moore’s Law and explores how they believe it will likely continue over the course of the next several years. We also explore whether there are fields other than digital electronics that either have an emerging Moore’s Law situation, or the promise of such a law driving their future performance.

--

The quest for Moore’s Law and its potential impact in other disciplines is a journey the technology industry is only starting: crossing the Rubicon from the semiconductor industry into other, less explored fields, but with the particular mindset that Moore’s Law created. Our goal is to explore whether Moore’s Law opportunities are emerging in other disciplines, and what their potential impact might be. To that end, we interviewed several professors and researchers and asked whether they could see emerging ‘Moore’s Laws’ in their own disciplines. Below are some highlights of those discussions, ranging from CS+X to potential candidates in the energy sector:

Sensors and Data Acquisition
Ed Parsons, Google Geospatial Technologist
The More than Moore discussion can be extended beyond the main chip, to other components on the same board or elsewhere in the device a user carries. Greater sensor capabilities (for measuring pressure, electromagnetic fields and other local conditions) make it possible to include sensors in smartphones, glasses and other devices and to perform local data acquisition. This trend is strong, and should allow future devices benefiting from Moore’s Law to receive enough data to run more complex applications.

Metcalfe’s Law states that the value of a telecommunication network is proportional to the square of the number of connected nodes in the system. This law can be used alongside Moore’s Law to evaluate the value of the Internet of Things. The network itself can be seen as composed of layers: at the user’s personal level (to capture data related to the user’s body, or to immediately accessible objects), locally around the user (such as data from within the same street), and finally globally (data from the global internet). The extrapolation made earlier in this series (several TB available in flash memory) will make it possible to construct, exchange and download/upload entire contexts for a given situation or application, and to use those contexts with very little or even no network activity.
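As a rough illustration of how the two laws compound, the sketch below contrasts Moore-style capability growth with Metcalfe-style network value. The base values, node counts and the two-year doubling period are assumptions chosen purely for illustration:

```python
# Illustrative sketch (assumed parameters, not figures from this post):
# Moore's Law as exponential growth in device capability, Metcalfe's Law
# as network value proportional to the square of the number of nodes.

def moore_capability(years, base=1.0, doubling_years=2.0):
    """Device capability after `years`, doubling every `doubling_years`."""
    return base * 2 ** (years / doubling_years)

def metcalfe_value(nodes):
    """Network value proportional to the square of the number of nodes."""
    return nodes ** 2

# If the number of connected nodes itself doubles every two years, network
# value quadruples every two years -- growing faster than the capability
# of any single device.
for year in (0, 2, 4, 6):
    nodes = 1000 * 2 ** (year / 2)  # assumed node growth
    print(year, moore_capability(year), metcalfe_value(nodes))
```

The point of the sketch is only the relative growth rates: a doubling network yields a quadrupling Metcalfe value, which is why the Internet of Things can add value faster than any single device improves.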

Future of Moore’s Law and its impact on Physics
Sverre Jarp, CERN
CERN, with its experiments at the Large Electron-Positron Collider (LEP) and the Large Hadron Collider (LHC), generates data on the order of a petabyte per year; this data has to be filtered, processed and analyzed in order to find the meaningful physics events that lead to new discoveries. In this context Moore’s Law has been particularly helpful in allowing computing power, storage and networking capabilities at CERN and at other High Energy Physics (HEP) centers to scale up regularly. Several generations of hardware and software have come and gone over the journey from mainframes to today’s clusters.

CERN has a long tradition of collaboration with chip manufacturers and with hardware and software vendors to understand and predict the next trends in the computing evolution curve. Recent analysis indicates that Moore’s Law will likely continue over the next decade; the statement that ‘several TB of flash memory will be available by 2025’ may even be a little conservative.

Big Data Visualizations
Katy Börner, Indiana University
Thanks to Moore’s Law, the amount of data available for any given phenomenon, whether sensed or simulated, has been growing by several orders of magnitude over the past decades. Intelligent sampling can be used to filter out the most relevant bits of information and is practiced in Physics, Astronomy, Medicine and other sciences. Subsequently, data needs to be analyzed and visualized to identify meaningful trends and phenomena, and to communicate them to others.

While most people learn in school how to read charts and maps, many never learn how to read a network layout; data literacy remains a challenge. The Information Visualization Massive Open Online Course (MOOC) at Indiana University teaches students from more than 100 countries not only how to read but also how to design meaningful network, topical, geospatial, and temporal visualizations. Using the tools introduced in this free course, anyone can analyze, visualize, and navigate complex data sets to understand patterns and trends.

Candidate for Moore’s Law in Energy
Professor Francesco Stellacci, EPFL
It is currently hard to see a “Moore’s Law” emerging among candidate energy technologies. Nuclear fusion could hold some positive surprises, if several significant breakthroughs occur in turning it into usable energy. For any other technology, growth will be slower. Today’s best solar cells have about 30% efficiency; efficiency can still improve, but by no more than a factor of about 3, and cost could be driven down by an order of magnitude. Best estimates therefore show a combined performance improvement of roughly a factor of 30, spread over many years.

Further Discussion of Moore’s Law in Energy
Ross Koningstein, Google Director Emeritus
As of today there is no obvious Moore’s Law in the energy sector that could cut major costs by 50% every 18 months. However, material properties at the nanoscale and chemical processes such as catalysis are being investigated and could lead to promising results. The targeted applications are hydrocarbon creation at scale and improvements to oil refinery processes, where breakthroughs in micro- and nano-structured catalysts are being pursued. Hydrocarbons are much more compatible at scale with the existing automotive, aviation and natural gas distribution systems. Here in California, Google Ventures has invested in Cool Planet Energy Systems, a company whose technology can convert biomass to gasoline, jet fuel or diesel with impressive efficiency.

One of the challenges is the ability to run many experiments at low cost per experiment, instead of only a few expensive experiments per year; discoveries are likely to come faster when more experiments are run. This requires heavier investment, which is difficult to achieve within slim-margin businesses. The nurturing of disruptive energy businesses is therefore likely to come from new players, alongside those existing players that decide to fund significant new investments.

Of course, these discussions could be extended to many other sectors. The opportunities for more discourse on the impact and future of Moore’s Law on CS and other disciplines are abundant, and the conversation can be continued with your comments on the Research at Google Google+ page. Please join, and share your thoughts.

Moore’s Law Part 3: Possible extrapolations over the next 15 years and their impact



This is the third entry of a series focused on Moore’s Law and its implications moving forward, edited from a white paper on Moore’s Law written by Google University Relations Manager Michel Benard. This series quotes major sources on Moore’s Law and explores how they believe it will likely continue over the course of the next several years. We also explore whether there are fields other than digital electronics that either have an emerging Moore’s Law situation, or the promise of such a law driving their future performance.

--

More Moore
We examine data from the ITRS 2012 Overall Roadmap Technology Characteristics (ORTC 2012) and select notable extrapolations. The chart below shows chip size trends up to the year 2026, along with the “Average Moore’s Law” line. Additionally, the ORTC 2011 tables include data on increases in 3D chip layers (up to 128 layers), including costs. Finally, the ORTC 2011 index sheet estimates that the DRAM cost per bit at production will be ~0.002 microcents by ~2025. From these sources we draw three More Moore (MM) extrapolations, that by the year 2025:

  • 4Tb Flash multi-level cell (MLC) memory will be in production
  • There will be ~100 billion transistors per microprocessing unit (MPU)
  • 1TB of RAM will cost less than $100
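As a quick sanity check on the last extrapolation, the sketch below converts the quoted ORTC figure of ~0.002 microcents per DRAM bit into a cost per terabyte. Packaging, margins and the exact bit/byte convention are ignored, so this is an order-of-magnitude estimate only:

```python
# Sketch: production cost of 1TB of DRAM at the quoted ORTC figure of
# ~0.002 microcents per bit. Decimal TB convention assumed.

MICROCENT_IN_DOLLARS = 1e-6 / 100      # one microcent expressed in dollars
cost_per_bit = 0.002 * MICROCENT_IN_DOLLARS
bits_per_tb = 8 * 10**12               # 1 TB = 8e12 bits

cost_1tb = cost_per_bit * bits_per_tb
print(f"~${cost_1tb:.0f} per TB at production")
# ~$160: the same order of magnitude as the "<$100" extrapolation,
# before accounting for further cost-per-bit decline.
```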


More than Moore
It should be emphasized that “More than Moore” (MtM) technologies do not constitute an alternative or even a competitor to the digital trend as described by Moore’s Law. In fact, it is the heterogeneous integration of digital and non-digital functionalities into compact systems that will be the key driver for a wide variety of application fields. Whereas MM may be viewed as the brain of an intelligent compact system, MtM refers to its capabilities to interact with the outside world and the users.

As such, functional diversification may be regarded as a complement to digital signal and data processing in a product. This includes interaction with the outside world through sensors and actuators and the subsystems for powering the product, implying analog and mixed-signal processing, the incorporation of passive and/or high-voltage components, micro-mechanical devices, enabling biological functionalities, and more. While MtM looks very promising for a variety of diversification topics, the ITRS study does not give figures from which “solid” extrapolations can be made. However, we can make safe and not-so-safe bets going towards 2025, and examine what these extrapolations would mean for the user.

Today we can buy a 1TB hard disk drive (HDD) for $100, but the access speed to data on the disk does not allow us to take full advantage of that data in a fully interactive, or even practical, way. More importantly, the size and construction of HDDs do not allow them to be incorporated into mobile devices. Solid-state drives (SSDs), in comparison, have similar data transfer rates (~1Gb/s), latencies typically 100 times lower than HDDs, and a significantly smaller form factor with no moving parts. The promise of several TB of flash memory, cost-effectively by 2025, in a device carried throughout the day (e.g. smartphone, watch, clothing) represents a paradigm shift with regard to today’s situation; it will empower users by moving them from an environment where local data must be refreshed frequently (as with augmented reality applications) to one where full contextual data is available locally and refreshed only when critically needed.
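A back-of-the-envelope sketch of why latency, rather than raw transfer rate, is what makes local flash practical for interactive use. The latency figures are assumed round numbers consistent with the ~100x ratio mentioned above, not measurements:

```python
# Sketch: total time for many small random reads is dominated by
# per-access latency, not by the (similar) sequential transfer rates.

HDD_LATENCY_S = 10e-3    # ~10 ms per random seek (assumed)
SSD_LATENCY_S = 0.1e-3   # ~0.1 ms per random read (assumed, ~100x lower)

def random_read_time(n_reads, latency_s):
    """Total time for `n_reads` small random accesses, latency-bound."""
    return n_reads * latency_s

n = 100_000  # small random lookups into a local contextual data set
print(f"HDD: {random_read_time(n, HDD_LATENCY_S):.0f} s")  # minutes
print(f"SSD: {random_read_time(n, SSD_LATENCY_S):.0f} s")  # seconds
```

With identical transfer rates, the same workload takes on the order of minutes on an HDD but seconds on flash, which is the difference between a batch operation and an interactive one.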

If data on the order of TBs is pre-loaded, a complete contextual data set can be in place before an action or movement begins, and the device can apply its local intelligence throughout the action, regardless of network availability or performance. This opens up the possibility of combining local 3D models with remote inputs, allowing applications like 3D conferencing to become available. The development and use of 3D avatars could even facilitate new social interaction models. To benefit from such applications, personal devices such as Google Glass may become pervasive, allowing users to navigate 3D scenes and environments naturally, as well as facilitating 3D conferencing and its “social” interactions.

The opportunities for more discourse on the impact and future of Moore’s Law on CS and other disciplines are abundant, and can be continued with your comments on the Research at Google Google+ page. Please join, and share your thoughts.

The Computer Science Pipeline and Diversity Part 2: Some positive signs and looking towards the future



(Cross-posted on the Google for Education Blog)

The disparity between the growing demand for computing professionals and the number of graduates in Computer Science (CS) and Information Technology (IT) has been highlighted in many recent publications. The tiny pipeline of diverse students (women and underrepresented minorities (URMs)) is even more troubling. Some of the factors causing these issues are:
  • The historical lack of STEM (Science, Technology, Engineering and Mathematics) capabilities in our younger students; lack of proficiency has had a substantial impact on the overall number of students pursuing technical careers. (PCAST Stem Ed report, 2010)
  • On the lack of girls in computing: boys often come into computing knowing more than girls because they have been doing it longer. This can cause girls to lose confidence, reinforcing the perception that computing is a man’s world. Lack of role models, encouragement and relevant curriculum are additional factors that discourage girls’ participation. (Margolis 2003)
  • On the lack of URMs in computing: the best and most enthusiastic minority students are effectively discouraged from pursuing technical careers because of systemic and structural issues in our high schools and communities, and because of the unconscious bias of teachers and administrators. (Margolis, 2010)
Over the last 3-4 years, however, we have seen some significant positive signals in STEM education in general, and in CS/IT in particular.
  • Math1 and Science2 results as measured by the National Assessment of Educational Progress (NAEP) have improved slightly since 2009, both in general and for female and minority students.
  • Over the last 10 years, there has been an increase in the number of students earning STEM degrees, but the news on women graduates is not as positive.
“Overall, 40 percent of bachelor’s degrees earned by men and 29 percent earned by women are now in STEM fields. At the doctoral level, more than half of the degrees earned by men (58 percent) and one-third earned by women (33 percent) are in STEM fields. At the bachelor’s degree level, though, women are losing ground. Between 2004 and 2014, the share of STEM-related bachelor’s degrees earned by women decreased in all seven discipline areas: engineering; computer science; earth, atmospheric and ocean sciences; physical sciences; mathematics; biological and agricultural sciences; and social sciences and psychology. The biggest decrease was in computer science, where women now earn 18 percent of bachelor’s degrees. In 2004, women earned nearly a quarter of computer science bachelor’s degrees (23 percent).” -(U.S. News, 2015)
  • There has been a steady growth in investment in education companies, particularly those focused on innovative uses of technology.
  • The number of publications in Google Scholar on STEM education that focus on gender issues or minority students has steadily increased over the last several years.
Results from Google Scholar, using “STEM education minority” and “STEM education gender” as search terms
  • Successful marketing campaigns such as Hour of Code and Made with Code have helped raise awareness on the accessibility and importance of coding, and the diverse career opportunities in CS.
  • There has been growth in developer bootcamps over the last few years, as well as online “learn to code” programs (code.org, CS First, Khan Academy, Codecademy, Blockly Games, PencilCode, etc.), and an increase in opportunities for K12 students to learn coding in their schools. We have also seen non-profits emerge focused specifically on girls and URMs (Technovation, Girls who Code, Black Girls Code, #YesWeCode, etc.)
  • One of the most positive signals has been the growth of graduates in CS over the past few years.
Source: 2013 Taulbee Survey, Computing Research Association
So we are seeing small improvements in K-12 STEM proficiency and in undergraduate STEM and CS degrees earned, significant growth in investment in education innovation, more and more research on the issues of gender and ethnicity in STEM fields, and increased opportunities for all students to learn coding skills online, through non-profit programs, through developer boot camps, or in their schools.

However, an interesting, and potentially threatening, development resulting from this positive momentum is the lack of capacity and faculty in CS departments to handle the increased number of enrollments and majors in CS. Colleges and universities, as a whole, aren’t adequately prepared to handle the surge in demand for CS education: currently there just aren’t enough instructors to teach all the students who want to learn.

This has happened in the past. In the 1980s, with the introduction of the PC, and again during the dot-com boom, interest in CS surged. CS departments managed the load by increasing class sizes as much as they possibly could, and/or by putting enrollment caps in place and making CS classes harder. The former caused some faculty to leave for industry, while the latter decreased the diversity pipeline.

These kinds of caps have two effects which limit access by women and under-represented minorities:
  • First, the students who succeed the most in intro CS are the ones with prior experience.
  • Second, creating these kinds of caps creates a perception of CS as a highly competitive field, which is a deterrent to many students. Those students may not even try to get into CS.
-(Guzdial, 2014)

If we allow the past to repeat itself, we may again find CS faculty leaving for industry and fewer diverse students going into the field. In addition, unlike the dot-com boom, when interest in CS plummeted with the bust, it’s unlikely we will see a decrease in enrollments, particularly in the introductory CS courses. “CS+X”, the application of CS in other fields, is illustrated by the following sample list of interdisciplinary majors at various universities:
  • Yale: "Computer Science and Psychology is an interdepartmental major..."
  • USC: "B.S in Physics/Computer Science for students with dual interests..."
  • Stanford: "Mathematical and Computational Sciences for students interested in..."
  • Northeastern: "Computer Science/Music Technology dual major for students who want to explore connections between..."
  • Lehigh: "BS in Computer Science and Business integrates..."
  • Dartmouth: "The M.D.-Ph.D. Program in Computational Biology..."
The number of non-major students taking CS courses, particularly the introductory ones, is growing, which makes the capacity issues worse.

At Google, we recently funded a number of universities via our 3X3 award program (3 times the number of students in 3 years), which aims to facilitate innovative, inclusive, and sustainable approaches to address these scaling issues in university CS programs. Our hope is to disseminate and scale the most successful approaches that our university partners develop. A positive development, which was not present when this happened in the past, is the recent innovation in online education and technology. The increase in bandwidth, high-quality content and interactive learning opportunities may help us get ahead of this challenging capacity issue.


1Average mathematics scores for fourth- and eighth-graders in 2013 were 1 point higher than in 2011, and 28 and 22 points higher respectively in comparison to the first assessment year in 1990. Hispanic students made gains in mathematics from 2011 to 2013 at both grades 4 and 8. Fourth- and eighth-grade female students scored higher in mathematics in 2013 than in 2011, but the scores for fourth- and eighth-grade male students did not change significantly over the same period. (Nation’s Report Card)

2The average eighth-grade science score increased two points, from 150 in 2009 to 152 in 2011. Scores also rose among public school students in 16 of 47 states that participated in both 2009 and 2011, and no state showed a decline in science scores from 2009 to 2011. A five-point gain from 2009 to 2011 by Hispanic students was larger than the one-point gain for White students, an improvement that narrowed the score gap between those two groups. Black students scored three points higher in 2011 than in 2009, narrowing the achievement gap with White students. (Nation’s Report Card)

Moore’s Law Part 1: Brief history of Moore’s Law and its current state

This is the first entry of a series focused on Moore’s Law and its implications moving forward, edited from a white paper on Moore’s Law written by Google University Relations Manager Michel Benard. This series quotes major sources on Moore’s Law and explores how they believe it will likely continue over the course of the next several years. We also explore whether there are fields other than digital electronics that either have an emerging Moore’s Law situation, or the promise of such a law driving their future performance.


--

Moore’s Law is the observation that, over the history of computing hardware, the number of transistors on integrated circuits doubles approximately every two years. The period often quoted as “18 months” is due to Intel executive David House, who predicted that period for a doubling in chip performance (a combination of more transistors and their being faster). -Wikipedia
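To see what these doubling periods imply, here is a small illustrative sketch. The 1971 baseline of ~2,300 transistors (the Intel 4004) is an assumed starting point for the illustration, not a figure from this post:

```python
# Sketch: exponential projection under a fixed doubling period.
# Transistor count doubles every ~24 months; the "18 months" figure
# quoted above refers to chip performance, not transistor count.

def project(base, years, doubling_months):
    """Value after `years`, doubling every `doubling_months` months."""
    return base * 2 ** (years * 12 / doubling_months)

# Transistors per chip, doubling every 24 months over the 40 years
# from 1971 (Intel 4004, ~2,300 transistors) to 2011:
transistors_2011 = project(2300, 40, 24)
print(f"{transistors_2011:.2e}")  # ~2.4e9, roughly the billion-transistor era
```

Forty years at a 24-month doubling period is 20 doublings, i.e. a factor of about a million, which is why the projection lands near real chips of the early 2010s.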

Moore’s Law is named after Intel co-founder Gordon E. Moore, who described the trend in his 1965 paper. In it, Moore noted that the number of components in integrated circuits had doubled every year from the invention of the integrated circuit in 1958 until 1965 and predicted that the trend would continue "for at least ten years". Moore’s prediction has proven to be uncannily accurate, in part because the law is now used in the semiconductor industry to guide long-term planning and to set targets for research and development.

The capabilities of many digital electronic devices are strongly linked to Moore’s Law: processing speed, memory capacity, sensors and even the number and size of pixels in digital cameras. All of these are improving at (roughly) exponential rates as well. This exponential improvement has dramatically enhanced the impact of digital electronics in nearly every segment of the world economy, and is a driving force of technological and social change in the late 20th and early 21st centuries.

Most improvement trends have resulted principally from the industry’s ability to exponentially decrease the minimum feature sizes used to fabricate integrated circuits. Of course, the most frequently cited trend is in integration level, which is usually expressed as Moore’s Law (that is, the number of components per chip doubles roughly every 24 months). The most significant trend is the decreasing cost-per-function, which has led to significant improvements in economic productivity and overall quality of life through proliferation of computers, communication, and other industrial and consumer electronics.

Transistor counts for integrated circuits plotted against their dates of introduction. The curve shows Moore’s Law, the doubling of transistor counts every two years; the y-axis is logarithmic, so the straight line corresponds to exponential growth.

All of these improvement trends, sometimes called “scaling” trends, have been enabled by large R&D investments. In the last three decades, the growing size of the required investments has motivated industry collaboration and spawned many R&D partnerships, consortia, and other cooperative ventures. To help guide these R&D programs, the Semiconductor Industry Association (SIA) initiated the National Technology Roadmap for Semiconductors (NTRS) in 1992. Since its inception, a basic premise of the NTRS has been that continued scaling of electronics would further reduce the cost per function and promote market growth for integrated circuits. Thus, the Roadmap has been put together in the spirit of a challenge—essentially, “What technical capabilities need to be developed for the industry to stay on Moore’s Law and the other trends?”

In 1998, the SIA was joined by corresponding industry associations in Europe, Japan, Korea, and Taiwan to participate in a 1998 update of the Roadmap and to begin work toward the first International Technology Roadmap for Semiconductors (ITRS), published in 1999. The overall objective of the ITRS is to present industry-wide consensus on the “best current estimate” of the industry’s research and development needs out to a 15-year horizon. As such, it provides a guide to the efforts of companies, universities, governments, and other research providers or funders. The ITRS has improved the quality of R&D investment decisions made at all levels and has helped channel research efforts to areas that most need research breakthroughs.

For more than half a century these scaling trends have continued, and sources in 2005 expected them to continue until at least 2015 or 2020. However, the 2010 update to the ITRS has growth slowing at the end of 2013, after which transistor counts and densities are expected to double only every three years. Accordingly, since 2007 the ITRS has addressed the concept of functional diversification under the title “More than Moore” (MtM). This concept addresses an emerging category of devices that incorporate functionalities that do not necessarily scale according to Moore’s Law, but provide additional value to the end customer in other ways.

The MtM approach typically allows for the non-digital functionalities (e.g., RF communication, power control, passive components, sensors, actuators) to migrate from the system board-level into a particular package-level (SiP) or chip-level (SoC) system solution. It is also hoped that by the end of this decade, it will be possible to augment the technology of constructing integrated circuits (CMOS) by introducing new devices that will realize some “beyond CMOS” capabilities. However, since these new devices may not totally replace CMOS functionality, it is anticipated that either chip-level or package level integration with CMOS may be implemented.

The ITRS provides a very comprehensive analysis of the outlook for Moore’s Law towards 2020 and beyond. The analysis can be roughly segmented into two trends: More Moore (MM) and More than Moore (MtM). In the next blog in this series, we will look into the recent conclusions of the ITRS 2012 report on both trends.

The opportunities for more discourse on the impact and future of Moore’s Law on CS and other disciplines are abundant, and can be continued with your comments on the Research at Google Google+ page. Please join, and share your thoughts.

The Computer Science Pipeline and Diversity Part 1: How did we get here?



(Cross-posted on the Google for Education Blog)

For many years, the Computer Science industry has struggled with a pipeline problem. Since 2009, when the number of undergraduate computer science (CS) graduates hit a low mark, there have been many efforts to increase the supply to meet an ever-increasing demand. Despite these efforts, the projected demand over the next seven years is significant.
Source: 2013 Taulbee Survey, Computing Research Association
Even if we are able to sustain positive growth in graduation rates over the next 7 years, we will fill only 30-40% of the available jobs.

“By 2022, the computer and mathematical occupations group is expected to yield more than 1.3 million job openings. However, unlike in most occupational groups, more job openings will stem from growth than from the need to replace workers who change occupations or leave the labor force.” -Bureau of Labor Statistics Occupational Projection Report, 2012.

More than 3 in 4 of these 1.3M jobs will require at least a bachelor’s degree in CS or an Information Technology (IT) field. With our current production of only 16,000 CS undergraduates per year, we are far off the mark. Furthermore, within this too-small pipeline of CS graduates is an even smaller supply of diverse students: women and underrepresented minorities (URMs). In 2013, only 14% of graduates were women and 20% were URMs. Why does this lack of representation matter?
  • The workforce that creates technology should be representative of the people who use it, or there will be an inherent bias in design and interfaces.
  • If we get women and URMs involved, we will fill more than 30-40% of the projected jobs over the next 7 years.
  • Getting more women and URMs to choose computing occupations will reduce social inequity, since computing occupations are among the fastest-growing and pay the most.
Why are so few students interested in pursuing computing as a career, particularly women and URMs? How did we get here?

One fundamental reason is the lack of STEM (Science, Technology, Engineering and Mathematics) capabilities in our younger students. Over the last several years, international comparisons of K12 students’ performance in science and mathematics place the U.S. in the middle of the ranking or lower. On the National Assessment of Educational Progress, less than one-third of U.S. eighth graders show proficiency in science and mathematics. Lack of proficiency has led to lack of engagement in technical degree programs, which include CS and IT.

“In the United States, about 4% of all bachelor’s degrees awarded in 2008 were in engineering. This compares with about 19% throughout Asia and 31% in China specifically. In computer sciences, the number of bachelor’s and master’s degrees awarded decreased sharply from 2004 to 2007.”  -NSF: Higher Education in Science and Engineering.

The lack of proficiency has had a substantial impact on the overall number of students pursuing technical careers, but there have also been shifts resulting from trends and events in the technology sector that compound the issue. For example, we saw an increase in CS graduates from 1997 to the early 2000’s which reflected the growth of the dot-com bubble. Students, seeing the financial opportunities, moved increasingly toward technical degree programs. This continued until the collapse, after which a steady decrease occurred, perhaps as a result of disillusionment or caution.

Importantly, there are additional factors minimizing the diversity of individuals, particularly women, pursuing these fields. It’s important to note that there are no biological or cognitive reasons that justify a gender disparity in computing participation (Hyde 2006). With similar training and experience, women perform just as well as men in computer-related activities (Margolis 2003). But there can be important differences in the predilections and interests reinforced during childhood that affect the diversity of those choosing to pursue computer science.

In general, most young boys build and explore; play with blocks, trains, etc.; and engage in activity and movement. For a typical boy, a computer can be the ultimate toy that allows him to pursue his interests, and this can develop into an intense passion early on. Many girls like to build, play with blocks, etc. too. For the most part, however, girls tend to prefer social interaction. Most girls develop an interest in computing later through social media and YouTubers, girl-focused games, or through math, science and computing courses. They typically do not develop the intense interest in computing at an early age like some boys do – they may never experience that level of interest (Margolis 2003).

Thus, some boys come into computing knowing more than girls because they have been doing it longer. This can cause many girls to lose confidence and drive during adolescence, with the perception that technology is a man’s world; both girls and boys perceive computing to be a largely masculine field (Mercier 2006). Furthermore, there are few role models at home, at school or in the media to change the perception that computing is just not for girls. This overall lack of support and encouragement keeps many girls from considering computing as a career (Google white paper, 2014).

In addition, many teachers are oblivious to, or even reinforce, gender stereotypes by assigning problems and projects that are oriented more toward boys or are of little interest to girls. This lack of relevant curriculum matters: many women who have pursued technology as a career cite relevant courses as critical to their decision (Liston 2008).

While gender differences exist within underrepresented minority (URM) groups as well, there are compelling additional factors that affect them. Jane Margolis, a senior researcher at UCLA, led a study in 2000 that resulted in the book Stuck in the Shallow End. She and her research group studied three very different high schools in Los Angeles, with different student demographics. The results of the study show that across all three schools, minority students do not get the same opportunities. While all of the students have access to basic technology courses (word processing, spreadsheet skills, etc.), advanced CS courses are typically made available only to students who, because of opportunities they already have outside school, need them less. Additionally, the best and most enthusiastic minority students can be effectively discouraged by systemic and structural issues and by the belief systems of teachers and administrators. The result is that a small, mostly homogeneous group of students has all the opportunities and is introduced to CS, while the rest are relegated to the “shallow end of computing skills,” which perpetuates inequities and keeps minority students from pursuing computing careers.

These are some of the reasons why the pipeline for technical talent is so small and why the diversity pipeline is even smaller. Over the last two years, however, we have started to see some positive signs.
  • Many students are becoming more aware of the relevance and accessibility of coding through campaigns such as Hour of Code and Made with Code.
  • This increase in awareness has helped to produce a steady increase in CS and IT graduates, and there’s every indication this growth will continue.
  • More opportunities to participate in CS-related activities are becoming available for girls and URMs, such as CS First, Technovation, Girls who Code, Black Girls Code, #YesWeCode, etc.
There’s much more that can be done to reinforce these positive trends and to get more students of all types to pursue computing as a career. This is important not only to the high-tech industry; it is critical for our nation to compete globally. In the next post of this series, we will explore some of the positive steps that have been taken in increasing the diversity of graduates in Computer Science (CS) and Information Technology (IT) fields.

Moore’s Law Part 2 More Moore and More than Moore

This is the second entry of a series focused on Moore’s Law and its implications moving forward, edited from a White paper on Moore’s Law, written by Google University Relations Manager Michel Benard. This series quotes major sources about Moore’s Law and explores how they believe Moore’s Law will likely continue over the course of the next several years. We will also explore if there are fields other than digital electronics that either have an emerging Moore’s Law situation, or promises for such a Law that would drive their future performance.

--

One of the fundamental lessons derived from the past successes of the semiconductor industry comes from the observation that most of the innovations of the past ten years—those that have indeed revolutionized the way CMOS transistors are manufactured nowadays—were initiated 10–15 years before they were incorporated into the CMOS process. Strained silicon research began in the early 90s, high-κ/metal-gate work was initiated in the mid-90s and multiple-gate transistors were pioneered in the late 90s. This fundamental observation generates a simple but fundamental question: “What should the ITRS do to identify now what the extended semiconductor industry will need 10–15 years from now?”
- International Technology Roadmap for Semiconductors 2012

More Moore
As we look at the years 2020–2025, we can see that the physical dimensions of CMOS manufacturing are expected to cross below the 10 nanometer threshold. It is expected that as dimensions approach the 5–7 nanometer range it will be difficult to operate any transistor structure that utilizes metal-oxide semiconductor (MOS) physics as its basic principle of operation. Of course, we expect that new devices, like the very promising tunnel transistors, will allow a smooth transition from traditional CMOS to this new class of devices to reach these new levels of miniaturization. However, it is becoming clear that fundamental geometrical limits will be reached in the above timeframe. By fully utilizing the vertical dimension, it will be possible to stack layers of transistors on top of each other, and this 3D approach will continue to increase the number of components per square millimeter even when horizontal physical dimensions are no longer amenable to any further reduction. It seems important, then, that we ask ourselves a fundamental question: “How will we be able to increase the computation and memory capacity when the device physical limits are reached?” It becomes necessary to re-examine how we can get more information into a finite amount of space.
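The handoff from lateral shrinking to vertical stacking described above can be illustrated with a small back-of-the-envelope model. This is a sketch with hypothetical numbers (the starting pitch, shrink factor and scaling floor are illustrative assumptions, not roadmap data): density grows by pitch reduction until an assumed ~5 nm floor is reached, after which doubling the number of stacked layers keeps the components-per-mm² trend going.

```python
# Illustrative model: sustain density growth by stacking layers once
# lateral scaling hits an assumed physical floor. Numbers are hypothetical.

MIN_PITCH_NM = 5.0  # assumed lateral scaling floor (the 5-7 nm range above)

def density_per_mm2(pitch_nm, layers):
    """Components per mm^2 for a simple square-grid model (1 mm = 1e6 nm)."""
    per_layer = (1e6 / pitch_nm) ** 2
    return per_layer * layers

pitch, layers = 14.0, 1
for year in range(2014, 2027, 2):
    print(f"{year}: pitch={pitch:5.2f} nm, layers={layers}, "
          f"density={density_per_mm2(pitch, layers):.2e}/mm^2")
    if pitch / 1.4 >= MIN_PITCH_NM:
        pitch /= 1.4   # classic ~0.7x linear shrink per node
    else:
        layers *= 2    # continue doubling via the vertical dimension
```

The point of the toy model is only that the doubling cadence can, in principle, survive the end of lateral scaling by migrating into the third dimension.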

The semiconductor industry has thrived on Boolean logic; after all, for most applications the CMOS devices have been used as nothing more than an “on-off” switch. Consequently, it becomes of paramount importance to develop new techniques that allow the use of multiple (i.e., more than 2) logic states in any given and finite location, which evokes the magic of “quantum computing” looming in the distance. However, short of reaching this goal, a field of active research involves increasing the number of states available, e.g. to 4–10 states, and thereby doubling the number of “virtual transistors” every 2 years.
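The payoff from multi-state devices follows directly from information theory: a device with N distinguishable states stores log₂(N) bits, so moving from 2 to 4 states doubles capacity in the same footprint. A minimal sketch of that idealized arithmetic (ignoring real-world noise margins and error rates):

```python
import math

def bits_per_device(num_states):
    """Idealized information capacity of one device with N distinguishable states."""
    return math.log2(num_states)

# A binary switch stores 1 bit; multi-state devices store more in the
# same physical footprint (noise margins and readout error are ignored here).
for states in (2, 4, 8, 10):
    print(f"{states} states -> {bits_per_device(states):.2f} bits/device")
```

By this measure, a 4-state device is already worth two “virtual transistors,” which is the sense in which adding states can substitute for adding devices.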


More than Moore
During the blazing progress propelled by Moore’s Law in semiconductor logic and memory products, many “complementary” technologies have progressed as well, although not necessarily scaling at the pace of Moore’s Law. Heterogeneous integration of multiple technologies has generated “added value” in devices with multiple applications, beyond the traditional semiconductor logic and memory products that had led the semiconductor industry from the mid 60s to the 90s. A variety of wireless devices contain typical examples of this confluence of technologies, e.g. logic and memory devices, display technology, microelectromechanical systems (MEMS), and RF and Analog/Mixed-signal technologies (RF/AMS).

The ITRS has incorporated More than Moore and RF/AMS chapters into the main body of the ITRS, but it is uncertain whether this is sufficient to encompass the plethora of associated technologies now entangled in modern products, or the multi-faceted consumer who has become an influential driver of the semiconductor industry, demanding custom functionality in commercial electronic products. In the next blog of this series, we will examine select data from the ITRS Overall Roadmap Technology Characteristics (ORTC) 2012 and attempt to extrapolate the progress of the next 15 years, and its potential impact.

The opportunities for more discourse on the impact and future of Moore’s Law on CS and other disciplines are abundant, and can be continued with your comments on the Research at Google Google+ page. Please join, and share your thoughts.

Introducing Structured Snippets, now a part of Google Web Search



Google Web Search has evolved in recent years with a host of features powered by the Knowledge Graph and other data sources to provide users with highly structured and relevant data. Structured Snippets is a new feature that incorporates facts into individual result snippets in Web Search. As seen in the example below, interesting and relevant information is extracted from a page and displayed as part of the snippet for the query “nikon d7100”:
The WebTables research team has been working to extract and understand tabular data on the Web with the intent to surface particularly relevant data to users. Our data is already used in the Research Tool found in Google Docs and Slides; Structured Snippets is the latest collaboration between Google Research and the Web Search team employing that data to seamlessly provide the most relevant information to the user. We use machine learning techniques to distinguish data tables on the Web from uninteresting tables, e.g., tables used for formatting web pages. We also have additional algorithms to determine quality and relevance that we use to display up to four highly ranked facts from those data tables. Another example of a structured snippet for the query “superman”, this time as it appears on a mobile phone, is shown below:
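The core filtering step described above, separating genuine data tables from tables used merely for page layout, can be illustrated with a toy heuristic. This sketch is purely hypothetical and is not the production system: WebTables uses learned classifiers over much richer features, but simple signals like row/column consistency and cell length point in the same direction.

```python
# Toy sketch (hypothetical heuristics, not Google's classifier): guess
# whether an HTML table holds data or is just page-layout scaffolding.

def looks_like_data_table(rows):
    """rows: list of rows, each a list of cell strings."""
    if len(rows) < 2:
        return False                        # too small to carry facts
    width = len(rows[0])
    if width < 2 or any(len(r) != width for r in rows):
        return False                        # ragged tables are usually layout
    # Data tables tend to have short, attribute-value style cells.
    avg_len = sum(len(c) for r in rows for c in r) / (len(rows) * width)
    return avg_len < 40

spec_table = [["Sensor", "24.1 MP"], ["ISO", "100-6400"], ["Weight", "675 g"]]
layout_table = [["A long block of navigation and promotional text ..." * 3]]
print(looks_like_data_table(spec_table))    # True
print(looks_like_data_table(layout_table))  # False
```

A table passing such a filter (like the camera-spec example) is the kind of source from which attribute-value facts could then be ranked and surfaced in a snippet.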
Fact quality will vary across results based on page content, and we are continually enhancing the relevance and accuracy of the facts we identify and display. We hope users will find this extra snippet information useful.