Wednesday, June 19, 2013

Vine Will Survive!

Instagram is planning to launch video functionality in two days. But don’t go deleting Vine just yet. Before shoving Vine into the deadpool, let’s all just calm down for a second.
Vine has been declared by many to be the “Instagram for Video.” Instagram’s own video product is likely already too late to squash Vine like a bug. Heck, Facebook couldn’t even get Poke and Messenger off the ground after incumbents clobbered the space. What makes anyone think Instagram video would be any different?

Vine launched in January of this year, just after the holidays, and spent a few months ramping up the user base before launching on Android a few weeks ago. At the time, Vine had 13 million downloads. Not too shabby for approximately five months of work. It took Vine a few days to swing to the top of the App Store, and the same was true on Google Play following the Android launch.
When Instagram launched on Android, seventeen months after launching on iOS, it had around 30 million users. Obviously, users are a different metric than downloads, but you can see how Vine’s growth is relatively astounding given the timeframe. Especially when you factor in the less pointed evidence: Vine shares have surpassed Instagram shares on Twitter, for example, or even just hearing the term “Vine it” regularly in everyday life. And having Twitter as a parent company doesn’t hurt either.
Vine is already established, and better yet, making waves. Vine was used by the Tribeca Film Festival for a special #6SecFilm Contest. The app has been toyed with by designers and advertisers to build new interactive music videos. Brands love Vine because it lets products move in ways that Twitter and Facebook don’t.
And Vine, of course, is still iterating quickly. We’ve seen the team respond to feature requests like the ability to use the front-facing camera as well as the rear-facing one, and I wouldn’t be surprised to see interesting additions like Voiceover or Animation pop up soon.
Instagram is a powerful foe. The app has over 100 million users, and is now owned by the most powerful social network in the world. But this is far from the end of Vine.
First, Vine is the end product of what Instagram was built to be. Vine skipped past still photos, and filters to make those photos (taken with bad mobile cameras) look prettier, and the slow grind of adding @mentions and photo maps and all those iterative feature tweaks.
Instead, Vine launched as a true Instagram for video, which now has an active and seemingly happy user base. It’s not Twitter’s Cleaner fish, even if Twitter bought up the app and launched it into existence (unlike Instagram’s organic growth that was later bought up by Facebook).
But where Instagram feels like a consumption app first (a time sink, almost), Vine doesn’t. Scrolling through my Vine stream is like having a hangover during an earthquake. Most often, it’s a lot of clanging and wind noise coupled with shaky video of my friends’ latest vacation.
Credit - TechCrunch, Jordan Crook

Thursday, June 13, 2013

Why Google is the big data company that matters most

Google Image Search just got a whole lot better, and the company’s purpose-built machine learning infrastructure is a big reason why. No surprise, Jeff Dean helped build it.

Google’s system can recognize flowers even when they’re not in the focal point.

Every now and then, someone asks “Who’ll be the Google of big data?”. The only acceptable answer, it seems, is that Google is the Google of big data. Yeah, it’s a web company on the surface, but Google has been at the forefront of using data to build compelling products for more than a decade, and it’s not showing any signs of slowing down.
Search, advertising, Translate, Play Music, Goggles, Trends and the list goes on — they’re all products that couldn’t exist without lots of data. But data alone doesn’t make products great — they also need to perform fast and reliably, and they eventually need to get more intelligent. Infrastructure and systems engineering make that possible, and that’s where Google really shines.
On Wednesday, the company showed off its chops once again, explaining in a blog post how it’s able to let users better search their photos because it was able to train some novel models on systems built for just that purpose. Here’s how Google describes the chain of events, after it had found the methods it wanted to test (from the winning team at the ImageNet competition):
“We built and trained models similar to those from the winning team using software infrastructure for training large-scale neural networks developed at Google in a group started by Jeff Dean and Andrew Ng. When we evaluated these models, we were impressed; on our test set we saw double the average precision when compared to other approaches we had tried. …
“Why the success now? … What is different is that both computers and algorithms have improved significantly. First, bigger and faster computers have made it feasible to train larger neural networks with much larger data. Ten years ago, running neural networks of this complexity would have been a momentous task even on a single image — now we are able to run them on billions of images. Second, new training techniques have made it possible to train the large deep neural networks necessary for successful image recognition.”

Of course Google had a system in place for training large-scale neural networks. And of course Jeff Dean helped design it. 

Dean is among the highlights of our upcoming Structure conference (June 19 and 20 in San Francisco). I’m going to sit down with him in a fireside chat and talk about all the cool systems Google has built thus far and what’s coming down the pike next. Maybe we’ll even get into what life is like being the Chuck Norris of the internet.
From an engineering standpoint, Dean has been one of the most important people in the short history of the web. He helped create MapReduce — the parallel processing engine underneath Google’s original search engine — and was the lead author on the MapReduce paper that directly inspired the creation of Hadoop. Dean has also played significant roles in creating other important Google systems, such as its BigTable distributed data store (which is the basis of NoSQL databases such as Cassandra, HBase and the National Security Agency’s Accumulo) and a globally distributed transactional database called Spanner.
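For readers unfamiliar with the idea, the map-and-reduce pattern Dean helped popularize can be sketched in a few lines. This is a single-process toy illustration of the classic word-count example, not Google's implementation; in a real MapReduce system the map and reduce phases run in parallel across many machines:

```python
from collections import defaultdict

def map_phase(documents):
    # Map step: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Shuffle + reduce step: group pairs by key and sum the counts.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["the quick brown fox", "the lazy dog", "the fox"]
word_counts = reduce_phase(map_phase(docs))
# word_counts["the"] is 3, word_counts["fox"] is 2
```

The appeal of the pattern is that both phases are embarrassingly parallel: mappers never talk to each other, and each reducer only needs the pairs that share its key.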
If you’re into big data or webscale systems, knowing what Dean is working on can be like looking into a crystal ball. When I asked Hadoop creator Doug Cutting what the future holds for Hadoop, he told me to look at Google.
Credit - Giga OM

Saturday, June 1, 2013

Google Won’t Approve Glass Apps That Recognize People’s Faces… For Now

The potential creep factor of Google Glass is something the search giant has to mitigate as best it can if it wants that kooky head-worn display to become a mass-market sensation (and even that may not be enough). A recent announcement highlights the company’s commitment to, well, do no evil.
Google confirmed on its official Glass G+ page earlier this evening that it won’t allow developers to create applications for the head-worn display that are capable of recognizing the faces of people the wearer encounters.
It’s no surprise that Google has been keen to downplay the idea of first-party face recognition features — Google Glass director Steve Lee gave the New York Times a near identical statement earlier this month — but now the company has made it clear that developers are subject to that same code of conduct.
That’s not to say that Google is throwing out the possibility of face-recognizing Glass apps in the future — the company just has to lock down a firm set of privacy protocols before letting developers run wild. As you’d expect, there’s no timetable in place yet so it’s still unclear when Glass will be able to chime in our ears with a long-forgotten acquaintance’s name. It may be a big win for privacy advocates, but the news doesn’t bode all that well for some of the early-stage startups that are angling to turn Glass into an ever-present recognition device. Consider the case of Lambda Labs — earlier this week the San Francisco team talked up its forthcoming facial and object recognition API that would allow developers to create applications with commands like “remember that face.” At the time, Lambda co-founder Stephen Balaban sought refuge in the fact that the Glass API didn’t explicitly bar the creation of face-recognition apps, a shelter that no longer exists. To quote the updated Glass developer policies:
Don’t use the camera or microphone to cross-reference and immediately present personal information identifying anyone other than the user, including use cases such as facial recognition and voice print. Applications that do this will not be approved at this time.
For now, though, Google seems all right with the prospect of using Glass to recognize individual people, so long as their faces aren’t the things being kept track of. Back in March, news broke of a partially Google-funded project from Duke University that saw researchers create a Glass app that let users identify people not by their faces but by a so-called “fashion fingerprint” that accounts for clothing and accessories. All things considered, it’s a neat way to keep tabs on individual people with a privacy mechanism baked into our behavior — all you need to do to be forgotten is change your clothes.
Credit - TC

Sunday, May 26, 2013

Steering clear of the iceberg: three ways we can fix the data-credibility crisis in science

Science has a data problem. There’s been a rash of experiments that no one can reproduce and studies that have had to be retracted. But there are some nascent efforts to address this credibility crisis by changing the way data is handled.

Science has a data-credibility problem. There’s been a rash of experiments that no one can reproduce and studies that have had to be retracted, all of which threatens to undermine the health and integrity of a fundamental driver of medical and economic progress. For the sake of the researchers, their funders and the public, we need to boost the power of the science community to self-correct and confirm its results.
In the eight years since John Ioannidis dropped the bomb that “most published research findings are false,” pockets of activist scientists from both academia and industry have been forming to address this problem, and it seems this year that some of those efforts are finally bearing fruit.

The research auditors

One interesting development is that a group of scientists is threatening to topple the impact factor, which ranks studies based on the journals in which they appear. This filter for quality research is based on journal prestige, but some scientists and startups are beginning to use alternative metrics in an effort to refocus on the science itself (rather than the publishing journal).
Taking a cue from the internet, they are citing the number of clicks, downloads, and page views that the research gets as better measures of “impact.” One group leading that charge is the Reproducibility Initiative, an alliance that includes an open-access journal (the Public Library of Science’s PLOS ONE) and three startups (data repository Figshare, experiment marketplace Science Exchange, and reference manager Mendeley). The Initiative isn’t trying to solve fraud, says Mendeley’s head of academic outreach William Gunn. Rather, it wants to address the rest of the dodgy data iceberg: the selective reporting of data, the vague methods for performing experiments, and the culture that contributes to so many scientific studies being irreproducible.
Stamp of approval

The Initiative will leverage Science Exchange’s network of outside labs and contract research organizations to do what its name says: try to reproduce published scientific studies. They have 50 studies lined up for their first batch. The authors of these studies have opted in for the additional scrutiny, so there is a good chance much of their research will turn out to be solid.
Whatever the outcome, though, the Initiative wants to use this first test batch to show the scientific community and funders that this kind of exercise is value-adding despite the costs, which are estimated to be $20,000 per study (about 10% of the original research price tag, depending on the study).
Gunn likens the process to a tax audit: not all studies can or should be tested for reproducibility, but the likely offenders may be among those that have high “impact factors,” much like high-income earners with many deductions warrant suspicion.
A stumbling block may be the researchers themselves, who like many successful people have egos to protect; no one wants to be branded “irreproducible.” The Initiative stresses that the replication effort is about setting a standard for what counts as a good method, and finding predictors of research quality that supersede journal, institution or individual.

The plumbers and librarians of big data

While the Reproducibility Initiative is trying to accelerate science’s natural self-correction process, another nascent group is working on improving the plumbing that serves data. The Research Data Alliance (RDA), which is partially funded by the National Science Foundation, is barely a few months old, but it is already uniting global researchers who are passionate about improving infrastructure for data-driven innovation. “The superwoman of supercomputing” Francine Berman, a professor at Rensselaer Polytechnic Institute, heads up the U.S. division of RDA.
The RDA is structured like the World Wide Web Consortium, with working groups that produce code, policies for data interoperability, and data infrastructure solutions. As yet there is no working group for data integrity, but it is within RDA’s scope, says Berman. While the effort is still in its infancy, the broad goals would be to come up with a way to make sure that the data contained in a study is more accessible to more people, and also that it doesn’t simply disappear at a certain point because of, say, storage issues. She says with data it’s like we’re back in the Industrial Revolution, when we had to create a new social contract to guide how we do research and commerce.
The men who stare at data
You can build places for data to live and spot-check it once it’s published, but there are also things researchers can do earlier, while they’re “interrogating” the data. After all, says Berman, you’re careful around strangers in real life, so why jump into bed with your data before you’re familiar with it?
Visualization is one of the most effective ways of inspecting the quality of your data, and getting different views of its potential. Automated processing is fast, but it can also produce spurious results if you don’t sanity-check your data first with visual and statistical techniques.
Stanford University computer scientist Jeff Heer, who also co-founded the data munging startup Trifacta, says visualization can help spot errors or extreme values. It can also test the user’s domain expertise (do you know what you’re doing and can you tell what a complete or faulty data set looks like?) and prior hypotheses about the data. “Skilled people are at the heart of the process of making sense of data,” says Heer. Someone with domain expertise who brings their memories and skills to the data can spot new insights. Context, in the form of metadata, is rich and omnipresent, Heer argues, as long as we’ve collected the right data the right way; it can aid in interpretation and combat the determinism of blindly collected and reported data sets.
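As a toy sketch of the kind of statistical sanity check that complements the visual inspection Heer describes (the function name, threshold, and sample data here are my own invention, not anything from Heer or Trifacta):

```python
import statistics

def flag_outliers(values, z_threshold=2.0):
    # Flag values more than z_threshold sample standard deviations
    # from the mean -- a crude numerical stand-in for eyeballing a
    # scatter plot before running automated processing.
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) > z_threshold * stdev]

# Five plausible sensor readings and one obvious data-entry error.
readings = [9.8, 10.1, 10.0, 9.9, 10.2, 97.0]
suspects = flag_outliers(readings)
# suspects is [97.0]
```

A check this simple would never replace a domain expert looking at the data, but it illustrates the point: a few seconds of interrogation up front can catch the kind of spurious value that would silently poison an automated pipeline.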
The three-pronged approach — better auditing, preservation and visualization — will help steer science away from the iceberg of unreliable data.
Credit - Giga Om.

Saturday, May 25, 2013

Jawfish Games Launches Its Real-Time, Multiplayer Platform For iOS, Android

Jawfish Games, a Seattle-based startup run by a former professional poker player and the engineering team that built the Full Tilt Poker site, launched a gaming platform that can host more than 100,000 simultaneous players in real-time tournaments across iOS, Android and the web.
While asynchronous, turn-based games have done well on mobile platforms and Facebook over the last five years, pure, real-time multiplayer games haven’t caught on as quickly partially because data connections haven’t been fast enough and because a game developer would need a critical mass of players to match them synchronously.
But Jawfish, which has raised $3.65 million in funding from firms like Founders Fund’s angel fund, Right Side Capital and other angels, says it has built a platform to do just that. Their platform can support more than 100,000 simultaneous players and host 1 million tournaments for less than $10 in bandwidth.
They initially came out with a few games in partnership with Seattle’s Big Fish Games, but now they’re bringing out more of their own titles.
Because Jawfish’s CEO Phil Gordon is a championship-winning professional poker player who has hosted The World Series of Poker and published five books on the game, the company is doing a poker game (of course). The poker game is designed to have the look and feel of a broadcast game, with Gordon’s running commentary throughout play.
They’ve also launched a basic word search game, called Jawfish Words, that lets players compete on getting the highest scores, finding the longest words or the most diagonals. There are more obscure goals too, like finding the most words with a single vowel. They launched that game last month through a partnership with Amazon. The company has pointed out some promising stats: the average player spends 21 minutes and plays 10.7 tournaments a day. Each tournament is about 60 to 90 seconds long.
They plan to build out a suite of classic games, from casual to casino titles, that make use of the platform. “Basically what we’re looking to do is to take games that people know and love and reinvent them for multiplayer real-time tournaments,” Gordon said. “That’s exactly what we’re going to do across a wide spectrum of games.”
While Jawfish hasn’t opened its platform up to third-party developers, there are other gaming networks that add multi-player mode to indie titles that are blowing up. Nextpeer, an Israeli startup, went from having just a few games in its network to well over 1,000 developers in the last several months.
“Barring a top 10-kind of franchise wanting to use our platform for multiplayer mode, it’s incredibly unlikely that we’re going to work with other studios,” Gordon said. “Certainly not for anything but the top tier. We know that our platform is the only one of its kind in the world and we think that it’s in our interest to keep the platform close to the vest and develop our own games.”

Credit - TechCrunch

Friday, May 24, 2013

HTC Desire 600 brings quad-core processor and BlinkFeed to the midrange

HTC is planning to release a mid-range device in the Desire range that will include a quad-core processor, 8-megapixel camera and its Sense 5 UI but has not yet confirmed that it will be headed to the UK.

HTC has announced the Desire 600, a mid-range dual-SIM Android Jelly Bean smartphone.

The handset was announced on Thursday, but HTC has yet to confirm whether it will be headed to the UK or the US.

The Desire 600 has some relatively high-end specs for a midrange phone, including a 1.2GHz quad-core processor, dual SIM slots, an 8-megapixel rear camera and a 1.6-megapixel front-facing camera for stills or video calling.

The Desire 600 will also include other headline features found on the company's current 'hero' handset the HTC One. Among them will be the Sense 5 UI, which features the BlinkFeed news and social media feed, as well as the HTC BoomSound dual front-facing speakers and BeatsAudio software.

The Desire 600 also includes the Video Highlights software that automatically creates 30-second clips of highlights from footage stored on the phone.

A spokeswoman told ZDNet that the handset had been confirmed for release in Russia, Ukraine, the Middle East and Asia, but did not provide pricing details.

HTC reported its Q1 2013 results earlier this month, which revealed the company suffered a 98 percent decline in profits year on year. Overall sales declined more than 35 percent, due in part to fierce competition in the mobile space.

Credit - Ben Woods

Thursday, May 23, 2013

Remote Control Apps for Android Devices

Assuming Direct Control!
It seems like the world is coming close to the point of being able to control just about any household appliance from a single device. We're not quite there yet, but there are numerous apps out there that can help you turn your Android smartphone into a TV or media center remote, or even a remote control for your desktop! Check out this collection of smart TV remote apps, media center remotes, and remote desktop apps for the couch potato in you.

Credit - Tom G.