Saturday 3 October 2015

Nvidia renames, formally launches new GeForce Now game streaming service


For the past year, Nvidia’s GeForce Grid service has given Shield owners the ability to stream games from remote servers. That service has now gone live as GeForce Now, an $8-per-month streaming subscription. Nvidia claims to offer 1080p-quality streaming at 60 FPS at that $8 rate, with the first three months included free (the service launched on October 1 in North America, the EU, and Japan).
Right now, the service offers more than 50 titles, including the first three Batman Arkham games, multiple Lego-themed games, Orcs Must Die (a personal favorite), Darksiders, and The Walking Dead. Multiple Grid titles are also available, as are the original Borderlands and The Witcher 2.
Nvidia is talking a good game with its promises of speed and latency, but it’s important to remember that much of GeForce Now’s performance will depend on your ISP, not Nvidia itself. While Nvidia’s PR talks up the fact that it has “optimized every piece of the technology behind GeForce NOW for gaming,” it can’t optimize the quality of your Internet connection or the consistency with which you receive content. In order to maintain a 60 FPS frame rate, new frames need to be delivered extremely quickly. Nvidia’s previous latency slides have implied that GeForce Grid could match console play, but that’s going to depend on your Internet connection.
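To put rough numbers on Nvidia’s claim (a back-of-the-envelope sketch, not Nvidia’s published figures; the bitrate and latency values below are assumptions): at 60 FPS, a new frame must arrive roughly every 16.7 milliseconds, and every millisecond of encoding, network, or decoding delay adds directly to input lag.

# Back-of-the-envelope budget for 60 FPS game streaming.
# All latency and bitrate figures below are illustrative assumptions,
# not values published by Nvidia.
TARGET_FPS = 60
frame_budget_ms = 1000 / TARGET_FPS        # ~16.7 ms between frames

encode_ms = 5     # server-side capture + encode (assumed)
network_ms = 30   # round trip to the data center (depends on your ISP)
decode_ms = 5     # Shield-side decode + display (assumed)
total_latency_ms = encode_ms + network_ms + decode_ms

print(f"Frame budget at 60 FPS: {frame_budget_ms:.1f} ms")
print(f"Assumed added input-to-display latency: {total_latency_ms} ms")

# At an assumed 20 Mbps stream, this much video data must arrive on time,
# every frame, with no hiccups from your connection.
bitrate_mbps = 20
kb_per_frame = (bitrate_mbps * 1_000_000 / 8) / TARGET_FPS / 1000
print(f"~{kb_per_frame:.0f} KB of video per {frame_budget_ms:.1f} ms window")
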
Nvidia’s GeForce Grid latency comparison chart
One other note about Nvidia and the Shield ecosystem: if you buy an Nvidia controller and plan to move it back and forth across multiple devices, bear in mind that the controller requires GeForce Experience to be installed on a PC in order to function — and GeForce Experience doesn’t work with AMD or Intel GPUs. If you need a controller that works across multiple devices and you aren’t willing to buy 100% into the NV ecosystem (something that’s increasingly hard to do, since a growing number of laptops don’t contain discrete GPUs), you’ll need to buy a second controller.
Nvidia has talked about wanting to become the Netflix of gaming, but it’s lock-ins like this that will make ubiquitous market domination difficult. Netflix is Netflix precisely because you can stream it to practically every device manufactured in the past five years. TVs, consoles, PCs, smartphones — Netflix runs on all of them. Nvidia’s GeForce Now service, in contrast, runs only on Nvidia’s Shield. Even the company’s controllers are only compatible with PCs if you have an Nvidia card installed — and, of course, Nvidia locks out customers from using GameWorks or PhysX on hybrid systems with an AMD GPU installed.
If Nvidia is serious about becoming a dominant force in game streaming that can compete with Sony’s PlayStation Now, it’s going to have to eventually open its ecosystem and begin attracting a wider range of customers. Still, GeForce Now is a first step, not an endgame — we’ll have to see if the company’s service can match its promised performance.

IBM breakthrough improves carbon nanotube scaling below 10nm


Over the past few years, IBM has poured a great deal of time and effort into researching carbon nanotubes (CNTs). Single-walled carbon nanotubes and their marvelous semiconductor properties were discovered independently at both NEC and IBM, and Big Blue has been interested in capitalizing on that discovery for well over a decade. IBM researchers have now published a paper in which they claim to have demonstrated highly beneficial scaling capabilities in carbon nanotubes.
We’ve discussed the difficulties of scaling semiconductors as the distance between features shrinks with every passing generation, but the specific breakthrough IBM is claiming is in an area of chip design we haven’t discussed much. In conventional silicon (or conventional carbon nanotubes, for that matter), there’s been a known problem — as semiconductors continue to shrink, the contact area between the metal and semiconductor hasn’t been scaling. Generally speaking, smaller contact areas lead to increased resistance, and increased resistance means higher heat. Manufacturers have fought back against these trends with a variety of methods, but the lack of contact scaling is one of the fundamental barriers to pushing silicon to ever-smaller sizes.
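A toy model makes the relationship clear: contact resistance scales roughly with the specific contact resistivity divided by the contact area, so halving a contact’s linear dimensions roughly quadruples its resistance. The resistivity value in the sketch below is an arbitrary placeholder, not a measured figure.

# Toy model of why shrinking contacts drives up resistance.
# rho_c (specific contact resistivity) is a placeholder value chosen
# purely for illustration; real values depend on the materials involved.
rho_c = 1e-8                                 # ohm*cm^2, assumed
for side_nm in (20, 10, 5):
    area_cm2 = (side_nm * 1e-7) ** 2         # square contact, nm -> cm
    r_contact = rho_c / area_cm2             # R = rho_c / A
    print(f"{side_nm} nm contact: ~{r_contact / 1000:.1f} k-ohm")
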
Carbon nanotubes (Image: IBM Research)
IBM thinks its carbon nanotube technology could solve that problem. EETimes has an excellent write-up of the technology, though it hilariously refers to EUV lithography as “already in place” — a declaration that I’m certain would surprise both Intel and TSMC. “With our recent breakthrough,” Shu-Jen Han, IBM’s manager of nanoscale science and technology at its T.J. Watson Research Center (Yorktown Heights), told EE Times, “we now know how to scale [the contact] so it is no longer the limiting factor for carbon nanotube transistors. Our new contacts are measured in angstroms and have just 36 k-ohms of resistance, including both ends.”
The new approach involves welding — nanowelding — a nanotube’s ends with molybdenum before the tubes are self-aligned as transistor channels. The final step is to heat the assembly to 850°C, melting the molybdenum and forming a carbide. According to Richard Doherty of Envision Engineering, this solution gives IBM a unique advantage in scaling all the way down to 1.8nm. According to EETimes, IBM may be prepping this technology to be ready at the 5nm node, for introduction at 3nm and below. With the method already demonstrated at 9nm, there seems to be little barrier to further scaling.

A nanotube future?

There are, however, some pointed caveats to these findings. First, IBM is currently only capable of building p-type transistors using this method. That doesn’t mean the technology is useless — many of the proposed near-term solutions for improved silicon scaling already rely on different materials for the p-channel and n-channel. But it definitely introduces additional complexity.
The International Technology Roadmap for Semiconductors (ITRS) hasn’t issued new reports since 2013; the group is currently evaluating changes to its measurement criteria and formulating new reports, but the 2013 data set is still online. Looking back at it, the outlook for near-term introduction of carbon nanotubes wasn’t very rosy.
The ITRS 2013 assessment of carbon nanotubes and other emerging device technologies
This chart shows the suitability of new materials compared to current methods, as well as the dates at which they might be introduced. Carbon nanotubes scored particularly badly in property control, contact variability, and control of formation, location, and direction. IBM claims to have made substantial advances in all three areas since this report was written. In a 2014 discussion with The Register, IBM Research’s director of physical sciences, Supratik Guha, stated: “You have to make carbon nanotubes with purity levels that are six nines. Today we are at four nines and over a year ago [we were] at 98.5 percent.” This aligns with the values listed in the 2013 ITRS reports, which were far too low for semiconductor manufacturing. IBM is pouring billions into CNT research, including work on each of these problems.
Keeping in mind that each additional “9” represents a full order of magnitude improvement, CNTs still had a very long way to go in 2014. Solving the contact problem, however, would clear a substantial hurdle and could open the door to long-term adoption. If the goal is to bring the technology in at the 3nm node, there’s plenty of time to wait — while multiple sources that wrote up this story skipped over this point, the prediction is a long way out. With nodes now shifting roughly every 30 months, we’re about two years from 10nm and five years from 5nm. That puts the introduction of CNTs at the 3nm node somewhere around 2023.
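To put those purity figures in perspective, here’s the impurity fraction implied by each milestone Guha cites (simple arithmetic based only on the percentages quoted above):

# Impurity fraction implied by each purity level mentioned above.
levels = {
    "98.5% (circa 2013)": 0.985,
    "four nines (2014)": 0.9999,
    "six nines (target)": 0.999999,
}
for label, purity in levels.items():
    impurity = 1 - purity
    print(f"{label}: roughly 1 off-spec tube per {1 / impurity:,.0f}")
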

Oculus Rift now expected to cost more than $350


Ever since the Oculus Rift debuted on Kickstarter, there have been questions about how much the headset would cost. The first dev kit cost $300 when purchased as part of the Kickstarter, and the second dev kit, which improved on the platform in a number of ways, weighs in at $350. Analysts had expected that Oculus would attempt to bring the consumer headset to market at around that price point, since $350 is already an incredibly steep price for a nascent platform, but Oculus founder Palmer Luckey has shattered that expectation.
World War Toons, one of a number of VR titles in development
According to a new interview published by RoadtoVR, Luckey has now stated that the consumer hardware will cost more than $350, though he’s not willing to say how much. Luckey explains that the Oculus Rift has added a great deal of technology since the days of the DK1 and DK2. According to him, the Rift is designed to be the best VR experience, hands down:
“It would really suck if you put something out there and people were like, ‘Ah man… the Rift is good, but it’s not quite there, you know? If only it was a little better, if the lenses were a little better, if the resolution was a little better, if the screens had been a little bit better, then it would be great,’ because you’d say, God, we could have just charged a little more and put a little bit more money into custom hardware and actually achieve that… I can’t tell you that it’s going to be $350, and I would say I think people are going to be happy with what they get for the price because I really do think it’s going to be the best VR headset you can buy.”

How the perfect is the enemy of the good

There are multiple reasons why this price is unlikely to sit well with the company’s fans. First, the entire point of the Facebook acquisition was supposed to be to give Oculus funds that would allow it to bring products to market that didn’t cost this much money. $400 (the minimum likely price point) is the same as what people would shell out for a PS4 or Xbox One bundle, which would contain multiple games. It’s the cost of a high-end PC video card or a 42-inch 1080p TV. That’s a huge commitment for an utterly unproven technology with few-to-no shipping titles on launch day. I’m certain there’ll be a game or two and some tech demos, but it’s going to take years before VR is widely integrated in games, assuming it achieves critical mass at all.
Eve: Valkyrie
Next, there’s the unflattering comparison against other VR solutions. It’s all well and good to aim for the top of the market, but that tends to only work when you’ve got a track record of delivering premium products. Companies like Apple have pulled this off before, but Oculus isn’t Apple. Perhaps more importantly, committing to a $400 Oculus also means buying a system capable of using that hardware effectively. The Oculus Rift may offer a vastly superior experience to, say, the Gear VR, but you can buy four Gear VR headsets for the price of a single Oculus Rift. That comparison isn’t flattering.
The final problem is this: Oculus wants to deliver the premier VR experience, but a $400 price tag guarantees that if the mass market adopts VR, it won’t do so on Oculus hardware — it’ll use equipment from Samsung or another low-cost manufacturer. This, in turn, means that whether VR sinks or swims will depend entirely on the experience of using VR on someone else’s hardware. If consumers buy low-end VR hardware and hate it, they’ll blame VR itself, not the cheap hardware.
I’m torn on this point, because I think high-quality VR experiences are critical to achieving acceptance for the platform — but if those experiences cost $400 or more, it’s unlikely that VR will ever achieve critical mass.

Sensible maneuver or Oculus Grift?

I believe Luckey when he says he wants to build the premier VR headset experience you can have today, but I’m not at all convinced he’s made the right call on this one. Much will depend on how Sony’s PlayStation VR and the HTC Vive are priced. If Oculus comes in under those solutions, it could still win significant market share for itself, even if the high price tag keeps most users on the sidelines.
Right now, it looks as though Oculus has priced itself neatly out of the market. At $400+, users are going to look for other solutions — and companies like HTC could make a killing selling “good enough” hardware. By staking an early claim to best-in-class, Luckey may have ensured that the Rift becomes irrelevant.

Loophole in 1970 Clean Air Act may prevent criminal prosecution of VW


Ever since the VW scandal broke two weeks ago, there have been ongoing questions about the penalties VW might face. Not only did the company deliberately include so-called “defeat devices” designed to allow its diesel vehicles to spew up to 40x more pollutants than US law allows, it also lied to investigators and misrepresented the nature of the problem, all while continuing to market these vehicles as “clean diesel” and a practical alternative to hybrids or electric cars. Yet despite the fact that VW has admitted it lied to and defrauded more than 11 million customers (closer to 15 million if you include vehicles from Audi and Skoda), it may not be possible to bring criminal charges against the company.
According to the Wall Street Journal, the Clean Air Act of 1970 specifies criminal penalties for fixed sources of emissions (power plants, for example), but only civil penalties for automakers and other mobile sources of air pollution. Our own cursory reading of the Clean Air Act confirms this — while an omission of this sort does not automatically mean the government cannot bring criminal charges against a company, the law does not explicitly carve out criminal penalties for this situation.
Emissions measured on US vehicles. The left two are made by VW
The WSJ reports that the government is looking to bring charges on other counts, including lying to federal officials, but it may choose to pursue civil rather than criminal charges in any case. Civil charges would allow VW’s staff to avoid jail time, but could still carry staggering fines with the potential to bankrupt the company. Pressure on VW has been growing from all sides; the company recently declared that it has a plan in place to bring all affected vehicles into compliance with both European and US law. The only way VW could have developed that plan so quickly is if it simply intends to update vehicle software to leave the “test” mode enabled full-time. While that fix is likely to work, it’s also likely to anger many VW drivers, who could see engine performance and fuel economy drop, or maintenance costs rise, as a result of the changes. The first class-action lawsuits against VW, needless to say, are already rolling forward.
At Slate, David Auerbach argues that the man almost certainly responsible for the VW scandal has, thus far, escaped notice. According to Auerbach, Ferdinand Piech, the grandson of Ferdinand Porsche and chairman of VW’s board until this past spring, hand-selected the executives who ran the affected businesses and presided over the culture in which they operated. VW, unlike most ostensibly public companies, keeps the vast majority of its voting power in private hands and operates with a rigid, top-down, hierarchical, and dynastic power structure. We don’t yet know which executives knew about the defeat devices or who ordered them installed, but given that problems are now cropping up across VW’s various product families, it seems obvious that this problem went deeper than just the CEO.

EU probes Samsung LCD efficiency claims as real-world testing shows huge discrepancies

The EU has announced that it will investigate multiple reports that Samsung’s LCD panels cheat on energy-efficiency tests by detecting when standardized test content is running and adjusting dynamic brightness levels to obscure their actual power consumption. Samsung has denied any wrongdoing — while the company acknowledges that power use can vary significantly by content, it claims the observed differences are caused by general energy-efficiency measures rather than any attempt to game a test. The EU is unlikely to accept any company’s word on such matters in the near future, however — not after the recent VW scandal that’s rocked the auto industry.
Samsung’s hardware was tested by the EU’s ComplianTV group, which found that Samsung’s “motion lighting” feature reduced power consumption under internationally accepted test conditions, but not when television sets were deployed in normal use. A second Swedish group found evidence of additional cheating. “The Swedish Energy Agency’s Testlab has come across televisions that clearly recognize the standard film (IEC) used for testing… These displays immediately lower their energy use by adjusting the brightness of the display when the standard film is being run. This is a way of avoiding the market surveillance authorities and should be addressed by the commission.”
Unfortunately, there’s a confounding variable in all of this. Samsung may or may not be cheating on these standardized tests, but power consumption in an LCD panel can absolutely vary depending on what you’re doing with it. In his recent LCD vs. OLED shootout, DisplayMate’s Dr. Raymond Soneira posted power-efficiency figures for multiple types of content on Samsung and LG displays:
Power consumption by content type on Samsung and LG displays
What’s tricky about this situation is that it’s possible for all sets of claims to be true. Samsung claims that its motion lighting feature is meant to reduce power consumption in certain types of content, the EU is investigating claims that Samsung cheats, and it’s obvious that different types of content produce very different levels of energy consumption. What this will come down to is whether Samsung and other TV manufacturers are claiming to hit efficiency targets in specific types of content that they don’t actually deliver, and whether the TVs operate in the same mode when displaying standard content as with test content.
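To see how content alone can move the numbers, consider a simplified model of a set with dynamic backlight dimming, where backlight power scales roughly with the average picture level (APL) of whatever is on screen. The wattages and APL values below are made-up illustrative figures, not measurements from Soneira’s testing or the EU’s.

# Illustrative only: how average picture level (APL) can swing the power
# draw of a TV with dynamic backlight dimming. All numbers are assumed.
BASE_W = 40            # electronics + minimum backlight (assumed)
MAX_BACKLIGHT_W = 80   # additional draw at full backlight (assumed)

def panel_watts(apl):
    """Rough model: backlight power scales with average picture level (0-1)."""
    return BASE_W + MAX_BACKLIGHT_W * apl

for label, apl in [("dark film scene", 0.15),
                   ("IEC test clip", 0.35),
                   ("bright sports broadcast", 0.70)]:
    print(f"{label}: ~{panel_watts(apl):.0f} W")
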
To some extent, this illustrates the problem with trying to create formal metrics for measuring performance. On the one hand, you want a unified set of tests that any organization can deploy against common hardware; designing custom benchmarks or test sequences is extremely time-consuming, and there’s always the chance you’ll miss some type of content that ought to have been measured but wasn’t. On the other hand, as TVs become more sophisticated, they gain the ability to detect and game certain kinds of tests.
These results also highlight the dangers of allowing companies to self-test and report the efficiency of their own hardware. Allowing the fox to guard the henhouse may save dollars in the short run, but it’s an inevitable recipe for long-term trouble.

Amazon will no longer sell Chromecast, Apple TV


For more than 15 years, Amazon has worked to build a reputation as the “everything” store, where virtually any product you might want to (legally) purchase can be found and ordered. The company has now announced that this will no longer be the case: by October 29, Amazon will no longer carry listings for either Google’s Chromecast or Apple’s Apple TV. The company’s reasoning?
“Over the last three years, Prime Video has become an important part of Prime,” an Amazon representative stated. “It’s important that the streaming media players we sell interact well with Prime Video in order to avoid customer confusion. Roku, XBOX, PlayStation and Fire TV are excellent choices.”
The reason Amazon is making this change has nothing to do with the usefulness of Prime Video. By most measurements, less than 20% of Amazon users buy Amazon Prime in the first place. Furthermore, there’s absolutely nothing stopping Amazon from creating an Android application that includes Chromecast support. Granted, Apple’s walled garden is a much tougher nut to crack, but there are already workarounds and options available on the Android side of the equation.
The fact that Amazon is only banning two competing devices is also quite indicative of the company’s actual end goals. How many TVs, after all, don’t support streaming Amazon Prime video? How many tablets? Why has this suddenly become an issue now that new Chromecast and Apple TV models have launched, rather than months ago?

From Everything Store to walled garden

Amazon’s CEO, Jeff Bezos, has previously boasted that Amazon Prime customers spend an average of $1,500 per year at Amazon, compared with just $625 for regular shoppers. If that’s true, it means Prime customers account for a disproportionate share of the site’s overall revenue — and Amazon has every reason to try to lock them into its own ecosystem. That push explains the increased ad presence in the newly designed Fire OS, as well as the generally lower-quality experience on those devices — Amazon is stepping up its lock-in.
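Combine Bezos’ figure with the fact that fewer than 20% of Amazon’s customers subscribe to Prime, and the math shows just how lopsided that revenue split would be (a rough estimate built entirely on the two numbers cited above):

# Rough revenue split implied by the figures cited above: ~20% of customers
# are Prime members spending $1,500/year, the rest spend $625/year.
prime_fraction = 0.20
prime_spend, regular_spend = 1500, 625

prime_revenue = prime_fraction * prime_spend
regular_revenue = (1 - prime_fraction) * regular_spend
prime_share = prime_revenue / (prime_revenue + regular_revenue)

print(f"Prime members: ~{prime_share:.0%} of revenue "
      f"from ~{prime_fraction:.0%} of customers")
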
This has nothing to do with customer confusion. Instead, it’s a push to build an Amazon-controlled wall around video content, the same way Apple walls off basically everything and Google locks down Android. For a company that often touts its own operating system as the anti-Google when it comes to not gathering personal information about its users, Amazon is all too willing to enact lockouts that ensure customers have no choice but to use its own products.
Having squawked so loudly about Apple’s illegal collusion with publishers, Amazon apparently has no problem abusing its own dominant market position to ensure customers are guided toward its own products and away from others. Roku ought to be looking over its shoulder. Netflix would almost certainly already be on the chopping block if Amazon thought it could get away with it.

NASA releases stunning new photo of Pluto’s moon Charon

New Horizons has put Pluto in its rear-view mirror, but NASA still has many gigabytes of data to download from the probe. As the images and readings trickle back, we’re gaining a greater understanding not only of the former ninth planet, but also of its moons. The latest image released by NASA shows Pluto’s largest moon, Charon, and it’s much more rugged and uneven than you probably expected.
During its flyby of the Pluto system, New Horizons got within 17,000 miles (27,000 km) of Charon. The moon is roughly 750 miles (about 1,200 km) in diameter and massive enough that it effectively forms a binary system with Pluto — the two bodies orbit a shared barycenter that lies outside Pluto itself. The images we had of Pluto before the New Horizons mission were not great, but Charon was even more mysterious. The newly released photo reveals a plethora of fascinating geological features, though.
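A quick calculation shows why the pair qualifies as a binary. Using approximate published values (Charon’s mass is roughly 12% of Pluto’s, and the two are separated by about 19,600 km), the barycenter falls well outside Pluto’s ~1,190 km radius:

# Approximate Pluto-Charon barycenter location. Masses, separation, and
# radius are rough published values; small errors don't change the result.
M_PLUTO = 1.303e22      # kg
M_CHARON = 1.586e21     # kg
SEPARATION_KM = 19600   # center-to-center distance
PLUTO_RADIUS_KM = 1188

# Distance from Pluto's center to the system's barycenter:
r_barycenter = SEPARATION_KM * M_CHARON / (M_PLUTO + M_CHARON)
print(f"Barycenter sits ~{r_barycenter:.0f} km from Pluto's center,")
print(f"outside Pluto's ~{PLUTO_RADIUS_KM} km radius.")
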
Charon’s surface turns out to be very irregular, with mountains, craters, and a giant 1,000-mile-long canyon stretching across the middle. It’s four times longer than the Grand Canyon, and that’s just the part of it we can see. New Horizons was only doing a flyby of the Pluto system, so it didn’t get a look at the other side of the moon. NASA researchers have entertained the possibility that the network of canyons stretches all the way around the moon.
NASA’s newly released New Horizons image of Charon
The plains south of the canyons are remarkably smooth compared with the craggy, pockmarked surface to the north. Scientists say this indicates that significant resurfacing of Charon has occurred in the not-too-distant past, meaning that, like Pluto, it might be more geologically active than we thought. Seeing so much geological activity on a small moon at the edge of the solar system had been considered very unlikely.
This radical remodeling of the surface could be due to volcanic activity, but NASA seems more interested in the possibility of cryovolcanism. An internal water ocean may have frozen in the past; the resulting changes in volume and mass distribution could have cracked the surface open and formed mountainous features.
We might learn more about the nature of Charon in the coming months. Even higher-resolution Charon data is still sitting on New Horizons, but it’s going to take months to get all of it back to Earth; the data link between NASA and the probe runs at just 1-2 Kbps. In the meantime, NASA is looking at where to send New Horizons next. It has enough fuel left to take a closer look at another Kuiper Belt object. Maybe we’ll find more evidence out there to help unravel the mysteries of Charon.
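The downlink arithmetic explains why the wait is measured in months rather than weeks. The 2 GB backlog below is an assumption chosen purely for illustration; the 1-2 Kbps rates come from the figure above.

# How long a data backlog takes at New Horizons' downlink rates.
# The ~2 GB backlog is an illustrative assumption; 1-2 Kbps is cited above.
backlog_bits = 2 * 8e9          # assume ~2 GB of data still on the probe

for kbps in (1, 2):
    seconds = backlog_bits / (kbps * 1000)
    days = seconds / 86400
    print(f"At {kbps} Kbps: ~{days:.0f} days of continuous downlink")

# And the probe can't downlink around the clock; Deep Space Network time
# is shared, so the real-world wait stretches even longer.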