Washington — as Washington does — is barreling towards a new reform plan designed to protect American innovation from overseas investors (which should really just be read as the Chinese these days). Earlier this week, congressional committees passed a measure designed to strengthen CFIUS, the Committee on Foreign Investment in the U.S., which we have written extensively about on TechCrunch. The bill would expand the powers of the committee to review transactions in more contexts, beyond its current mandate of looking only at changes in controlling interests.
Washington — as Washington does — has turned the debate, once a deeply technical one about the machinations of a mostly unknown government committee whose authority dates to Korean War-era legislation, into a histrionic fight about the future of American innovation. Along the way, this classic DC dramatization threatens to roll back the robust market of Chinese venture capital flowing to Silicon Valley startups.
Limiting those flows of dollars to U.S.-based companies would be a tremendous mistake. Silicon Valley is the strongest innovation region in the world, in no small part because of the robust venture capital dollars that fund risky startups trying to go big. While rules should be enacted to protect American intellectual property, Silicon Valley should be left alone to handle these problems in a more market-centric way.
Unfortunately, the histrionics of DC are already pushing this reform bill too far. To see this in action, let’s look at this in-depth Politico article that has been making the rounds on Capitol Hill this week. Its title, “How China acquires ‘the crown jewels’ of US technology,” already gets at its conclusion, but it is the section on venture capital that left me stupefied. Take this quote:
One major concern among specialists like Ware is that Beijing officials could use early Chinese investments in next-generation technology to map the software the federal government and even the Defense Department may one day use — and perhaps even corrupt it in ways that would give China a window into sensitive U.S. information.
“There’s a tremendous amount of intelligence value there,” Ware said. “All governments desire to know what other governments are doing. And knowing the technologies and how they work I think is a big part of that.”
Here’s the thing — the American government buys a lot of its products right out in the open through the procurement process. It actively signals what it is looking to buy at trade shows and in keynotes to encourage industry to build products that solve its problems. There exists an entire class of DC consultant that will tell you what the government is looking to buy in one, five, and ten years down the line. None of this is secret, nor should it be.
Now, that isn’t to say there isn’t confidential information that can be exchanged during the procurement process with sensitive agencies like the Defense Department. Obviously, integrating software with their existing systems will reveal a lot about the architecture of American national security computing, and the government has an interest in preventing the spread of that information widely.
The solution in my mind is not to block the hundreds of millions of dollars of Chinese venture capital flowing into the Valley, but rather to require national security agencies to work only with contractors with clean equity. For instance, if a Chinese investor owns 5% of a startup, then that startup could no longer be eligible for sensitive government contracts. Clear rules here empower startup founders to decide whether the capital they take is worth the potential loss of any government contracts that they may become ineligible for. In other words, there is a clear market dynamic that allows participants to decide what the benefit of capital is compared to the risk of intellectual property theft.
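As a toy illustration of how such a bright-line rule could work — the threshold, country list, and function names here are hypothetical sketches, not actual policy — an eligibility check might look like:

```python
# Hypothetical "clean equity" eligibility rule: a startup is ineligible
# for sensitive government contracts if restricted foreign investors hold
# at least some threshold of its equity. The 5% threshold and country
# list below are illustrative only.

RESTRICTED_COUNTRIES = {"CN"}  # illustrative, not a real designation
THRESHOLD = 0.05

def eligible_for_sensitive_contracts(cap_table):
    """cap_table: list of (investor_country_code, ownership_fraction).

    Returns True if restricted foreign ownership stays under the threshold.
    """
    restricted_share = sum(fraction for country, fraction in cap_table
                           if country in RESTRICTED_COUNTRIES)
    return restricted_share < THRESHOLD
```

A founder weighing a term sheet could see the tradeoff directly: a cap table of `[("US", 0.60), ("CN", 0.05)]` fails the check, while one with a 4% restricted stake passes.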
The other concern from Washington is that Chinese venture capitalists will get access to technical information as part of their investment. Again from Politico:
But Bryan Ware, CEO of Haystax Technology, which works with law enforcement, defense and intelligence clients on securing their technologies, cast some doubt on the idea that the owners of tech startups would naturally refuse to share details of their technology with their investors: “If you’ve got a Chinese investor and that’s the lifeblood that’s going to allow you to get your product out the door, or allow you to hire your next developer, telling them, ‘No, you can’t do that,’ or, ‘No you shouldn’t do that,’ while you have no other alternatives for financing — that’s just the nature of the dilemma.”
Ware’s solution to the dilemma is to just block the venture capital, thereby guaranteeing that the technology wouldn’t be built. You can’t steal intellectual property that hasn’t been invented!
Having worked in venture capital for a number of years, all I can say is that I have never seen venture capital investors ask for a level of technical information on an ongoing basis that would be of any use in creating a competing engineering effort. The one time that most VCs even slightly care about the technical side of these businesses is during the due diligence process, when coding libraries need to be checked for copyright and some firms do further technical due diligence on the codebase to verify a team’s competence.
The due diligence tasks can be solved through trusted third-party intermediaries, which frankly is what most firms already use for these processes (there aren’t a lot of investors who also happen to be coders anyway). Furthermore, lawyers already make abundantly clear what information an investor can access through an investment, and that language can be even more stringent in cases where a Chinese investor is involved.
Rather than a federal block on investment (or just the friction that an interagency process creates), let the market handle this particular problem. Let me be frank: any CEO of a startup that would give all of their technical information willy-nilly to their investors or customers — Chinese or otherwise — is so laughably incompetent about trade secrets that I can’t imagine their business surviving long-term anyway. Every startup has to make a call on when to share technical details and when not to (for instance, should you share your technical stack with a foreign corporation who wants to buy your product but needs to verify GDPR compliance?), and getting sophisticated about sharing is critical for surviving in the cutthroat Silicon Valley market.
I am focused on minority-stake venture investments in Silicon Valley with this argument. Obviously, the rules can and should be very different in takeover situations, or in bankruptcies where the acquirer will receive the complete technical details of a company. I share the concerns of many analysts around how easy it can be to learn U.S. intellectual property, and I do believe the Chinese have a robust program to exploit the American economy’s openness.
But in our rush to try to plug this flow of information, let’s not lose sight of what actions are dangerous, and which are mutually beneficial for all parties involved. Venture capital gives companies the ability to hire workers (almost always domestically) and build products that can create impactful value for the economy. Washington — as Washington does — is taking these CFIUS reforms too far, and that risks undermining the very region that is the pinnacle of innovation in the world today.
Sony’s PlayStation Vue, the over-the-top TV streaming service that’s now up against a host of new competitors including Hulu and YouTube TV, is expanding its lineup to include more local stations. While the service had already offered some limited access to locals in select markets, this expansion brings 200 more stations across the U.S. to its service, including ABC, CBS, FOX, and NBC stations.
In total, there are now more than 450 local stations available, the company says. (A list of the additions is available here.) It also today added ESPN College Extra.
The news is notable because of how far PlayStation Vue has slid in subscribers since newcomers launched into the market. And many of these newcomers have been touting their access to locals as one of their benefits.
PlayStation Vue, on the other hand, may have gained more locals this week, but it also recently lost all Sinclair-owned local stations, and before that, Viacom channels. While Sony says it doesn’t have plans to shut down Vue, it has also made statements about the service’s “uncertain” future, which has concerned its user base.
Likely because of its branding as “PlayStation,” many consumers may believe that the service is something that’s only available to PlayStation owners. It’s not, though – Vue also streams across platforms, including iOS, Android, the web, and connected media players like Apple TV, Roku, Android TV, Fire TV, and Chromecast.
While an early player in streaming TV, PlayStation Vue today lags on subscribers.
Dish’s Sling TV leads the pack with 2.3 million paying customers, followed by AT&T’s DirecTV Now with 1.8 million. Meanwhile, the newer Hulu Live TV service hit 800,000 subscribers in May, while YouTube TV passed 800,000 around the same time. PlayStation Vue, by comparison, reportedly has just over 500,000 subscribers.
The major players are benefitting from their large corporate parents, Digiday recently pointed out. For example, AT&T acquired Time Warner and is now leveraging its wireless business to sell subscriptions. And Google can afford to market and fund YouTube TV as it grows, and has bought expensive partnerships like the World Series and NBA Finals along the way.
What Vue has going for it is that the market itself – streaming – is growing, and its service is among the better-designed and more stable options. But if Sony isn’t willing to rebrand PlayStation Vue into something more approachable, it may never be able to come out ahead.
As one Congress ends and another begins, many are looking forward to a rebalancing of power — especially in the House of Representatives, which Democrats handily retook in November. But FCC Chairman Ajit Pai is more pleased with what the House failed to do — namely, roll back his repeal of net neutrality rules.
To be fair, he does have reason to celebrate; no one likes to see their work undone. But a statement issued today tells a very selective truth about congressional opposition to his master plan.
“I’m pleased that a strong bipartisan majority of the U.S. House of Representatives declined to reinstate heavy-handed Internet regulation,” Pai said. The “heavy-handed” remark is the usual boilerplate in reference to 2015’s rules, which used what the current FCC calls “depression-era” regulations to exert control over internet providers. That aspersion doesn’t really make sense, as I’ve noted before.
And the “strong bipartisan majority” bears a bit of explanation as well. Indeed, the Democrats fell about 30 short of the votes they needed to put the Congressional Review Act into effect and undo the FCC’s order. But that was only after the Senate, by a similar “strong bipartisan majority,” as Pai would no doubt put it, voted to reverse the repeal. No mention of that in his statement.
In fact the CRA was a long shot from the beginning, but as Senator Brian Schatz (D-HI) told me shortly after the repeal, “it’s very important to try, and it’s important to get everybody in Congress on the record. We want every member of Congress to have to go on the record and say whether or not they agree with what the commission just did.”
Although there was no actual change to the rule, the forced votes of the CRA did succeed in exposing the stances of Senators and Representatives who had hitherto avoided the issue.
Pai followed this questionable bit of crowing with a litany of vague reasons the new rules should be kept. The internet, he points out, “has remained free and open. Broadband speeds are up… Internet access is also expanding, and the digital divide is closing.”
The former claim is, as always, being tested by internet providers, who continue to inject ads, block or throttle services, and otherwise interfere until customers and watchdogs call them out.
But the latter claim in particular would be disputed by many, especially since the FCC’s own numbers tracking broadband deployment in the U.S. have been widely mocked as inaccurate and sourced uncritically from an industry with a vested interest in overstating its own accomplishments.
Furthermore, it’s entirely unclear whether Pai’s new rules have had any positive influence at all. Broadband investment has in fact not been affected, despite a $2 billion tax break given to cable companies and a number of other sweetheart deals. The most likely explanation for any positive effects is investment planned or made years ago, perhaps as far back as the Obama administration and the previous rules.
On top of that, the new rules face such close scrutiny and so many legal challenges that the industry would be foolish to let them affect its policies in anything but short-term matters. As happened with the 2015 rules, these could be gone in a year or two, or — with the Senate bullish on real net neutrality rules and a flipped House — replaced with actual legislation.
Mozilla called it a day with Firefox OS for mobile handsets back in 2015 and said it would test the waters for an IoT effort using some of the same technology (and it has). But that hasn’t spelled the complete end for the tech on mobile devices. Quietly, a company called KaiOS, built on a fork of Firefox OS, launched a new version of the OS built specifically for feature phones, and…
Facebook’s chief legal officer Colin Stretch has announced he’ll be out by the end of the year.
In the inevitable Facebook post explaining why he’s moving on, Stretch writes that after he and his wife made a decision to move back to DC from California “a few years ago… we knew it would be difficult for me to remain in this role indefinitely”.
“As Facebook embraces the broader responsibility Mark [Zuckerberg] has discussed in recent months, I’ve concluded that the company and the Legal team need sustained leadership in Menlo Park,” he adds, saying he’ll stay to the end of the year to help with the transition.
Facebook has had a very awkward two years so far as politically charged scandals go. First revelations about the massive Kremlin-fueled election interference which it totally missed. Then the massive Cambridge Analytica data misuse debacle which Facebook also claims to have totally missed, even though it (still apparently) employs one of the academics whose quiz app was the vehicle used to suck out people’s data.
Since then a bunch of follow-on admissions have flowed from the company confirming that access to user data on its platform wasn’t as locked down as it has historically liked to claim — claims it kept making despite masses of evidence to the contrary.
Nor, perhaps, as the FTC might have expected given a 2011 privacy settlement with the company. The regulator has now opened a fresh investigation. Meanwhile Facebook is carrying out a retrospective app audit — a not so tacit admission of its abject lack of enforcement of its own developer policy.
And yet there have not — at least publicly — been any heads rolling at Facebook despite all this failure.
Most likely because, as founder Mark Zuckerberg recently told Recode’s Kara Swisher during a podcast interview: “I designed the platform, so if someone’s going to get fired for this, it should be me.”
Of course Zuckerberg isn’t going to fire himself. Not when he doesn’t have to. Given the structure of the company he’s sitting pretty on his CEO throne, no matter how tarnished that crown now is.
Instead of firing himself — let’s not forget his 2016 attempts to dismiss the notion of Facebook-enabled election interference as a “pretty crazy idea” — Zuckerberg once again fired up his multi-year apology tour for privacy and data-related screw ups, rolling this through 2017 and 2018, as fresh scandals rocked the company’s reputation. And raised the specter of regulation to rein in damaging activity on the platform that the company itself has spectacularly failed to control.
Though you’d be hard pressed to read any of this scandalabra just by looking at the company’s earnings and stock price. Perhaps because investors view any regulation as likely to cement Facebook’s dominance, rather than upset the apple cart in a way that could allow a younger model to come in and disrupt its grip on consumers’ eyeballs.
Even so, 2018 has seen Zuckerberg, if not literally dragged then politically compelled, to appear in front of US and EU lawmakers — where he faced a barrage of questions; some dumb, others cutting to the heart of the company’s contradictions and its contradictory claims.
Last year Facebook’s chief legal officer Colin Stretch was also in the Senate, alongside reps from Google and Twitter, fielding awkward questions about Russian election interference and the spread of extremist content on the platform.
There Stretch made an unfortunate slip of the tongue during his introductory remarks — seemingly saying “keeping people unsafe on Facebook is critical to our mission” before quickly correcting himself to stress he’d meant to say “keeping people safe”. As Freudian slips go it’s a doozy.
But it’s certainly not a great time for Facebook to be losing its general counsel. Not with so much ongoing political and legal risk. Although if Zuckerberg isn’t going to go then perhaps other Facebook veterans will feel compelled to leave on his behalf.
With the usual departing platitudes, Stretch writes: “This has not been an easy decision. Companies are made up of people, and the people here are talented, caring, and most of all committed to doing the right thing. Even now, eight-and-a-half years after I started, I often stop myself and ask how I got so lucky to be a part of this.”
“There is never a ‘right time’ for a transition like this, but the team and the company boast incredible talent and will navigate this well,” he adds.
In March it also emerged that Facebook would likely be parting ways with its long-time chief security officer, Alex Stamos, this summer — after the New York Times reported on internal disagreements between the CSO and other execs, saying Stamos had wanted Facebook to be more public about the misuse of its platform by nation states.
This week BuzzFeed News obtained an internal memo sent by Stamos in March, days after he had confirmed his plans to leave the company, in which he writes: “I was the Chief Security Officer during the 2016 election season, and I deserve as much blame (or more) as any other exec at the company.”
He demurs on confirming whether he had actually quit for real at that point, but does admit to having had “passionate discussions with other execs”, including, seemingly, about Facebook’s approach to sharing public data on Russian disinformation.
“The world has changed from underneath us in many ways. One change has been the thrusting of private tech companies into the struggle between nation-states,” he writes on this. “Traditionally, the standard has been to report malicious activity by adversary nations to US law enforcement. We are moving into a world where the major platforms are going to be expected to provide our findings, attribution and data directly to the public, making us a visible participant in the battle between cyberwarfare titans.”
“This is an uncomfortable transition, and I have not always agreed with the compromises we have struck in the process. That being said, I believe my colleagues have all approached the process in good faith, and together we have sorted through legitimate equities that needed to be weighed,” Stamos adds.
Stamos goes on to implore colleagues to make major changes “to win back the world’s trust” — including rethinking the metrics Facebook fixes itself to as a business; being more adversarial in its thinking when building products and processes; and — in what looks very much like a swipe at the company’s use of dark pattern design in its consent flows — re-engineering how it gathers user data to be more honest and minimize (rather than maximize) data collection.
On that it’s worth noting that privacy by design is a core plank of Europe’s new data protection framework, GDPR — which Stamos is seemingly describing at one point in the memo, without giving it a literal name-check.
“We need to build a user experience that conveys honesty and respect, not one optimized to get people to click yes to giving us more access. We need to intentionally not collect data where possible, and to keep it only as long as we are using it to serve people,” he writes [emphasis his]. “We need to find and stop adversaries who will be copying the playbook they saw in 2016. We need to listen to people (including internally) when they tell us a feature is creepy or point out a negative impact we are having in the world. We need to deprioritize short-term growth and revenue and to explain to Wall Street why that is ok. We need to be willing to pick sides when there are clear moral or humanitarian issues. And we need to be open, honest and transparent about challenges and what we are doing to fix them.”
Granted, most of today’s big announcements were iterative updates on devices we’ve seen in past years. Even so, Microsoft did sneak one surprise into today’s event. The simply titled Surface Headphones are perhaps the oddest addition to the line of laptop and desktop products.
The key to these over-the-ear headphones, however, is clearly Cortana. The company has had some issues helping spread its Siri/Alexa/Assistant competitor, so perhaps these devices with their next level of noise canceling will go a ways toward spreading that gospel.
Priced at $350, the wireless headphones should be competitive with the likes of Bose’s ubiquitous QuietComfort and competing offerings from companies like Sony and Samsung. Of course, if Cortana is the main distinguishing factor, it’s going to be hard for these products to truly stand out from the pack.
It’s still early days, and we don’t even have a release date (beyond “coming soon”), so perhaps the company still has a couple of tricks left up its sleeve.
A House Oversight Committee report out Monday has concluded that Equifax’s security practices and policies were sub-par, that its systems were old and out of date, and that basic security measures — like patching vulnerable systems — could have prevented its massive data breach last year.
It comes a little over a year after Equifax, one of the world’s largest credit rating agencies, confirmed its systems had fallen to hackers. Some 143 million consumers around the world were affected — most of whom were in the U.S., but also Canada and the U.K. — with that figure later rising to 148 million consumers. Yet, to date, the company has faced almost no repercussions, despite a string of corporate failings that led to one of the largest data breaches in history.
The House report was scathing, criticizing the handling of the hack by Equifax’s former chief executive Richard Smith — who went on to “retire” following the breach.
Smith boasted that the credit giant held “almost 1,200 times” the data held in the Library of Congress every day, but the House report said that Equifax had “failed to implement an adequate security program to protect this sensitive data.”
“Such a breach was entirely preventable,” said the report.
The report confirmed most of what was already known, but added new color and insights that were previously unreported. The credit agency failed to patch a disclosed vulnerability in Apache Struts, a common open source web application framework, which Homeland Security had issued a warning about some months before. The unpatched Struts installation was powering its five-decades-old(!) web-facing system that allowed consumers to check their credit rating from the company’s website. The attackers used the vulnerability to pop a web shell on the server weeks later and managed to retain access for more than two months, the House panel found. From there, they were able to pivot through the company’s various systems after obtaining an unencrypted file of passwords on one server, letting the hackers access more than 48 databases containing unencrypted consumer credit data.
During that time, the hackers sent more than 9,000 queries on the databases, downloading data on 265 separate occasions.
Equifax’s former boss Smith passed the buck onto a single IT staffer for failing to patch the Struts system. In fact, it was just another example in the company’s cavalier attitude toward data security, the House report found.
“Equifax did not see the data exfiltration because the device used to monitor [the vulnerable server’s] network traffic had been inactive for 19 months due to an expired security certificate,” the report said. It took another two months for Equifax to update the expired certificate, at which point staff “immediately noticed suspicious web traffic.” Even Equifax’s own former chief information officer David Webb — who also “retired” following the incident — told House investigators that the whole incident could have been prevented had the company updated the vulnerable Struts system within two days of the patch’s release.
“Had the company taken action to address its observable security issues prior to this cyberattack, the data breach could have been prevented,” said the report.
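For a sense of how small the missed safeguard was: the kind of certificate-expiry monitoring that would have flagged a 19-month lapse fits in a few lines. This is a toy sketch — the device names and warning window below are illustrative, not Equifax’s actual setup:

```python
from datetime import datetime, timezone

def days_until_expiry(not_after, now):
    """Days remaining before a certificate's notAfter timestamp."""
    return (not_after - now).days

def expiring(certs, now, warn_days=30):
    """Return names of devices whose certificate is expired or expires
    within warn_days, so traffic inspection doesn't silently go dark."""
    return sorted(name for name, not_after in certs.items()
                  if days_until_expiry(not_after, now) < warn_days)
```

Run daily against an inventory of inspection devices, a check like this turns a 19-month blind spot into a 30-day heads-up.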
Two months after that, Equifax went public with the breach. That was no picnic either.
When Equifax’s “are you at risk?” website wasn’t crashing, it was spewing out incorrect results. Then the site was quickly impersonated — and was inadvertently linked to by Equifax’s own social media staff. When concerned consumers finally got through to the site, they were offered Equifax’s own credit freezing service, which was kicking out weak PINs — the one and only thing that was protecting consumers’ already fragile credit. The site was later pulled offline after another security researcher found a flaw in the credit freezing site that let an attacker siphon off sensitive consumer data. This was all while its call centers were overloaded, and many struggled to get basic questions answered.
In all, the House report didn’t hold back its critique, slamming the credit rating agency’s poor security practices — especially given that, as the report noted, consumers do not “have the ability to opt out of this information collection process.”
Equifax’s response to the House’s report? Go on the defensive.
“We are deeply disappointed that the Committee chose not to provide us with adequate time to review and respond to a 100-page report consisting of highly technical and important information,” said Equifax spokesperson Wyatt Jefferies. “During the few hours we were given to conduct a preliminary review we identified significant inaccuracies and disagree with many of the factual findings,” the statement continued.
“This is unfortunate and undermines our hope to assist the Committee in producing a credible and thorough public resource for those who wish to learn from our experience managing the 2017 cybersecurity incident,” the statement continued.
When TechCrunch asked for those “significant inaccuracies,” the spokesperson returned with a bulleted list of “factual errors” that read more like nit-picks than substantial discrepancies with the report: among them, that Equifax offered two years of credit monitoring and not one year as was stated in the report, and that the report referenced an apparent settlement with a state attorney general that has not occurred.
It’s been a bit of a tumultuous week, to put it lightly, but one must always remember that no matter how dire things look on the global stage, there are always makers working obsessively to create something beautiful and useless — like this MIDI-driven, robotic music box.
Tinkerer and music box aficionado Mitxela (via Hackaday) was pleased by this music box that takes punch cards or rolls as input, rather than having a metal drum with the notes sticking out of it. But who wants to punch cards all day to make a music box go? These things are supposed to be simple!
Mitxela first made a script that takes a MIDI file and outputs an image compatible with his laser cutter, allowing cards or paper strips to be created more or less automatically. But then there’s the question of wear and tear, storing the strips, taping them together for long pieces… why not just have the MIDI controller drive the music box directly?
It clearly took some elbow grease, but he managed to create a lovely little machine that does just that. The MIDI pattern maps to a set of small servos, each of which is attached to a rigid brass wire and plastic tip. When the servo activates, the tip pushes the corresponding little cylinder in the music box, producing a note.
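The mapping he describes — MIDI note numbers routed to servo channels, limited to the notes the box’s comb can actually play — can be sketched in miniature. The comb layout below is a placeholder for illustration, not Mitxela’s actual build:

```python
# Toy sketch of MIDI-to-servo routing for a music box. A music box comb
# only has tines for certain pitches, so notes outside it are dropped.
# Here the comb is a placeholder: 30 chromatic notes from middle C.
COMB_NOTES = list(range(60, 90))  # MIDI notes 60..89, illustrative only

NOTE_TO_SERVO = {note: idx for idx, note in enumerate(COMB_NOTES)}

def servo_for_note(midi_note):
    """Return the servo channel for a MIDI note, or None if the comb
    has no tine for that pitch."""
    return NOTE_TO_SERVO.get(midi_note)

def play_events(events):
    """Given (note, velocity) note-on events, return the servo channels
    to actuate, skipping notes the box cannot play."""
    channels = []
    for note, velocity in events:
        if velocity == 0:
            continue  # note-off is often encoded as velocity 0
        channel = servo_for_note(note)
        if channel is not None:
            channels.append(channel)
    return channels
```

The real build adds the hard part — timing and driving the servos fast enough for live play — but the routing logic itself is this simple lookup.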
Now MIDI files (single-instrument ones, anyway) can be played directly. But there’s more! Mitxela’s efforts to lower the power draw and simplify the mechanisms had the incidental side effect of lowering the latency so much that you can even play the music box in real time using a MIDI keyboard. How delightful!
The video has quite a few breaks to listen to video game themes, so if you’re just interested in the device, you can skip through to the (relatively) technical parts. But hearing the Mario theme tinkling through a neat little gadget like this isn’t the worst way to spend a Friday afternoon after a week like this one.
You can check out the rest of Mitxela’s little hardware projects at his website.
Perhaps the most surprising thing I learned about Signal when I spoke with Moxie Marlinspike, the app’s creator, last year at Disrupt, was that it was essentially running on a shoestring budget. A tool used by millions and feared by governments worldwide, barely getting by! But $50M from WhatsApp founder Brian Acton should help secure the app’s future.
Tesla has reached a deal with the Shanghai government to build a factory capable of producing 500,000 electric vehicles a year.
The factory would be the automaker’s second assembly plant and aimed at serving the alluring Chinese market. Tesla and the Shanghai Municipal People’s Government announced Tuesday they had signed the cooperative agreement.
Tesla announced last year it was working with the Shanghai municipal government to explore the possibility of establishing a factory in the region. Construction on the factory, which the company has dubbed Gigafactory 3, is expected to begin “in the future after we get all the necessary approvals and permits,” a Tesla spokesman told TechCrunch in an emailed statement.
“From there, it will take roughly two years until we start producing vehicles and then another two to three years before the factory is fully ramped up to produce around 500,000 vehicles per year for Chinese customers,” the spokesman said.
Tesla hasn’t provided an estimate of what the factory might cost to build. That’s a critical data point for Tesla, which has been burning through cash as it tries to ramp up production of its Model 3 vehicle.
Still, the deal is a milestone for Tesla and Musk, who has long viewed China as a crucial market. It’s also notable because this will be a wholly owned Tesla factory, not a traditional joint venture with the Chinese government. Foreign companies have historically had to form a 50-50 joint venture with a local partner to build a factory in China.
Chinese President Xi Jinping has pushed forward plans to phase out joint-venture rules for foreign automakers by 2022. Tesla is one of the first beneficiaries of this rule change.
Tesla is particularly exposed to escalating trade tensions between China and the U.S. because the company doesn’t have a factory in China, unlike other automakers such as BMW, Ford Motor and GM. Tesla builds its electric sedans and SUVs at its factory in Fremont, Calif. and ships them to China, which subjects the vehicles to an import tariff.
China raised its tariff on auto imports from the U.S. to 40 percent in retaliation against the Trump administration’s decision to put additional duties on Chinese-made goods, forcing Tesla to raise prices on its electric vehicles there.
“Shanghai will be the location for the first Gigafactory outside the United States,” Tesla CEO Elon Musk said in a statement. “It will be a state-of-the-art vehicle factory and a role model for sustainability. We hope it will be completed very soon. We’ve been impressed by the beauty and energy of Shanghai and we want our factory to add to that.”
There is a familiar trope in Hollywood cyberwarfare movies. A lone whiz kid hacker (often with blue, pink, or platinum hair) fights an evil government. Despite combatting dozens of cyber defenders, each of whom appears to be working around the clock and has very little need to use the facilities, the hacker is able to defeat all security and gain access to the secret weapon plans or whatever have you. The weapon stopped, the hacker becomes a hero.
The real world of security operations centers (SOCs) couldn’t be further from this silver screen fiction. Today’s hackers (who are the bad guys, by the way) don’t have the time to custom hack a system and play cat-and-mouse with security professionals. Instead, they increasingly build a toolbox of automated scripts and simultaneously hit hundreds of targets with, say, a newly discovered zero-day vulnerability, exploiting it as widely as possible before it is patched.
Security analysts working in a SOC are increasingly overburdened and overwhelmed by the sheer number of attacks they have to process. Yet, despite the promises of automation, they are often still using manual processes to counter these attacks. Fighting automated attacks with manual actions is like fighting mechanized armor with horses: futile.
Nonetheless, that’s the current state of things in the security operations world. But as V.Jay LaRosa, VP of Global Security Architecture at payroll and HR company ADP, explained to me, “The industry, in general from a SOC operations perspective, it is about to go through a massive revolution.”
That revolution is automation. Many companies have claimed that they are bringing machine learning and artificial intelligence to security operations, and the buzzword has been a mainstay of security startup pitch decks for some time. Results in many cases have been lackluster at best. But a new generation of startups is now replacing soaring claims with hard science, and focusing on the time-consuming low-hanging fruit of the security analyst’s work.
One of those companies, as we will learn shortly, is JASK. The company, which is based in San Francisco and Austin, wants to create a new market for what it calls the “autonomous security operations center.” Our goal is to understand the current terrain for SOCs, and how such a platform might fit into the future of cybersecurity.
Data wrangling and the challenge of automating security
The security operations center is the central nervous system of corporate security departments today. Borrowing concepts from military organizational design, the modern SOC is designed to fuse streams of data into one place, giving security analysts a comprehensive overview of a company’s systems. Those data sources typically include network logs, an incident detection and response system, web application firewall data, internal reports, antivirus, and many more. Large companies can easily have dozens of data sources.
Once all of that information has been ingested, it is up to a team of security analysts to evaluate that data and start to “connect the dots.” These professionals are often overworked since the growth of the security team is generally reactive to the threat environment. Startups might start with a single security professional, and slowly expand that team as new threats to the business are discovered.
Given the scale and complexity of the data, investigating a single security alert can take significant time. An analyst might spend 50 minutes just pulling and cleaning the necessary data to be able to evaluate the likelihood of a threat to the company. Worse, alerts are sufficiently variable that the analyst often has to repeatedly perform this cleanup work for every alert.
Data wrangling is one of the most fundamental problems that every SOC faces. All of those streams of data need to be constantly managed to ensure that they are processed properly. As LaRosa from ADP explained, “The biggest challenge we deal with in this space is that [data] is transformed at the time of collection, and when it is transformed, you lose the raw information.” The challenge then is that “If you don’t transform that data properly, then … all that information becomes garbage.”
The challenges of data wrangling aren’t unique to security — teams across the enterprise struggle to design automated solutions. Nonetheless, just getting the right data to the right person is an incredible challenge. Many security teams still manually monitor data streams, and may even write their own ad-hoc batch processing scripts to get data ready for analysis.
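To make the data wrangling problem concrete, here is a minimal sketch of the kind of ad-hoc normalization script a security team might write by hand. The source names and field names are invented for illustration; the one real design point, echoing LaRosa’s warning above, is that the script keeps the raw record alongside the transformed fields so no information is lost at collection time.

```python
# Hypothetical field mappings: two log sources use different names
# for the same concepts, so we rename them into a shared schema.
FIELD_MAPS = {
    "firewall": {"src": "source_ip", "ts": "timestamp"},
    "webapp":   {"client_addr": "source_ip", "logged_at": "timestamp"},
}

def normalize(source, record):
    """Map source-specific fields to the shared schema, preserving the raw record."""
    mapping = FIELD_MAPS[source]
    event = {"raw": record, "source": source}
    for old_key, new_key in mapping.items():
        if old_key in record:
            event[new_key] = record[old_key]
    return event

events = [
    normalize("firewall", {"src": "10.0.0.5", "ts": "2018-07-10T12:00:00Z", "action": "deny"}),
    normalize("webapp", {"client_addr": "10.0.0.5", "logged_at": "2018-07-10T12:00:03Z", "status": 500}),
]
```

Multiply this by dozens of sources, each with its own quirks and format changes, and it becomes clear why teams end up with brittle piles of scripts like this one.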
Managing that data inside the SOC is the job of a security information and event management system (SIEM), which acts as a system of record for the activities and data flowing through security operations. Originally focused on compliance, these systems allow analysts to access the data they need, and also log the outcome of any alert investigation. Products like ArcSight and Splunk, among many others, have owned this space for years, and the market is not going anywhere.
Due to their compliance focus though, security management systems often lack the kinds of automated features that would make analysts more efficient. One early response to this challenge was a market known as user entity behavior analytics (UEBA). These products, which include companies like Exabeam, analyze typical user behavior and search for anomalies. In this way, they are meant to integrate raw data together to highlight activities for security analysts, saving them time and attention. This market was originally standalone, but as Gartner has pointed out, these analytics products are increasingly migrating into the security information management space itself as a sort of “smarter SIEM.”
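The core idea behind these behavior analytics products can be illustrated with a toy baseline-and-deviation check. This is not how any particular vendor implements UEBA; it is just a sketch of the principle, using invented numbers: build a statistical baseline of a user’s normal activity, then flag behavior that deviates sharply from it.

```python
import statistics

# Hypothetical per-day login counts for one user over the past week (the baseline).
history = [4, 5, 3, 6, 4, 5, 4]
today = 42  # today's count, to be compared against the baseline

mean = statistics.mean(history)
stdev = statistics.pstdev(history)

# Flag today's behavior if it sits more than 3 standard deviations from the norm.
z_score = (today - mean) / stdev
is_anomalous = abs(z_score) > 3
```

Real products layer far more sophistication on top of this (peer-group comparisons, time-of-day models, entity linking), but the pattern of surfacing deviations from learned baselines is the common thread.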
These analytics products added value, but they didn’t solve the comprehensive challenge of data wrangling. Ideally, a system would ingest all of the security data and start to automatically detect correlations, grouping disparate data together into a cohesive security alert that could be rapidly evaluated by a security analyst. This sort of autonomous security has been a dream of security analysts for years, but that dream increasingly looks like it could become reality quite soon.
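To show what “grouping disparate data together” might look like in miniature, here is an illustrative correlation sketch. This is not JASK’s actual algorithm, and the alerts and entity names are invented; the idea is simply that alerts sharing a common entity (an IP, a hostname) can be clustered into one candidate incident for an analyst to review holistically.

```python
from collections import defaultdict

# Hypothetical alerts from different tools, each tagged with the entities involved.
alerts = [
    {"id": 1, "tool": "firewall",  "entities": {"10.0.0.5"}},
    {"id": 2, "tool": "antivirus", "entities": {"host-22"}},
    {"id": 3, "tool": "waf",       "entities": {"10.0.0.5", "host-22"}},
    {"id": 4, "tool": "ids",       "entities": {"10.9.9.9"}},
]

def correlate(alerts):
    """Union-find over shared entities: alerts linked by any entity form one incident."""
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Link every entity mentioned together in a single alert.
    for alert in alerts:
        ents = list(alert["entities"])
        for e in ents:
            parent.setdefault(e, e)
        for e in ents[1:]:
            union(ents[0], e)

    # Bucket alerts by the root entity of their cluster.
    incidents = defaultdict(list)
    for alert in alerts:
        root = find(next(iter(alert["entities"])))
        incidents[root].append(alert["id"])
    return list(incidents.values())

groups = correlate(alerts)
```

Here alerts 1, 2, and 3 collapse into a single incident (alert 3 bridges the IP and the host), while alert 4 stands alone: three tools’ worth of noise becomes one story for the analyst.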
LaRosa of ADP told me that “Organizationally, we have got to figure out how we help our humans to work smarter.” David Tsao, Global Information Security Officer of Veeva Systems, was more specific, asking “So how do you organize data in a way so that a security engineer … can see how these various events make sense?”
JASK and the future of “autonomous security”
That’s where a company like JASK comes in. Its goal, simply put, is to take all the disparate data streams entering the security operations center and automatically group them into attacks. From there, analysts can then evaluate each threat holistically, saving them time and allowing them to focus on the sophisticated analytical part of their work, instead of on monotonous data wrangling.
The startup was founded by Greg Martin, a security veteran who previously founded threat intelligence platform ThreatStream (now branded Anomali). Before that, he worked as an executive at ArcSight, a company that is one of the incumbent behemoths in security information management.
Martin explained to me that “we are now far and away past what we can do with just human-led SOCs.” The challenge is that every single security alert coming in has to go through manual review. “I really feel like the state of the art in security operations is really how we manufactured cars in the 1950s — hand-painting every car,” Martin said. “JASK was founded to just clean up the mess.”
Machine learning is one of those abused terms in the startup world, and cybersecurity is no exception. Visionary security professionals wax poetic about automated systems that instantly detect hackers as they attempt to gain access to a system and immediately respond with tested actions designed to thwart them. The reality is much less exciting: just connecting data from disparate sources is a major hurdle for AI researchers in the security space.
Martin’s philosophy with JASK is that the industry should walk before it runs. “We actually look to the autonomous car industry,” he said to me. “They broke the development roadmap into phases.” For JASK, “Phase one would be to collect all the data and prepare and identify it for machine learning,” he said. LaRosa of ADP, talking about the potential of this sort of automation, said that “you are taking forty to fifty minutes of busy work out of that process and allow [the security analysts] to get right to the root cause.”
This doesn’t mean that security analysts are suddenly out of a job; far from it. Analysts still have to interpret the information that has been compiled, and even more importantly, they have to decide on the best course of action. Today’s companies are moving from “runbooks” of static response procedures to automated security orchestration systems. Machine learning realistically is far from being able to accomplish the full lifecycle of an alert today, although Martin is hopeful that such automation is coming in later phases of the roadmap.
Martin tells me that the technology is being used by twenty customers today. The company’s stack is built on technologies like Hadoop, allowing it to process significantly higher volumes of data compared to legacy security products.
JASK is essentially carving out a unique niche in the security market today, and the company is currently in beta. The company raised a $2m seed from Battery in early 2016, and a $12m series A led by Dell Technologies Capital, which saw its investment in security startup Zscaler IPO last week.
There are thousands of security products in the market, as any visit to the RSA conference will quickly convince you. Unfortunately though, SOCs can’t just be built with tech off the shelf. Every company has unique systems, processes, and threat concerns that security operations need to adapt to, and of course, hackers are not standing still. Products need to constantly change to adapt to those needs, which is why machine learning and its flexibility is so important.
Martin said that “we have to bias our algorithms so that you never trust any one individual or any one team. It is a careful controlled dance to build these types of systems to produce general purpose, general results that applies across organizations.” The nuance around artificial intelligence is refreshing in a space that can see incredible hype. Now the hard part is to keep moving that roadmap forward. Maybe that blue-haired silver screen hacker needs some employment.
Karma, the Stockholm-based startup that offers a marketplace letting local restaurants and grocery stores offer unsold food at a discount, has raised $12 million in Series A funding.
Swedish investment firm Kinnevik led the round, with participation from U.S. venture capital firm Bessemer Venture Partners, appliance manufacturer Electrolux, and previous backer VC firm e.ventures. It brings total funding to $18 million.
Founded in late 2015 by Hjalmar Ståhlberg Nordegren, Ludvig Berling, Mattis Larsson and Elsa Bernadotte, and launched the following year, Karma is an app-based marketplace that helps restaurants and grocery stores reduce food waste by selling unsold food at a discount direct to consumers.
You simply register your location with the iOS or Android app and can browse various food merchants and the food items/dishes they have put on sale. Once you find an item to your liking, you pay through the Karma app and pick up the food before closing time. You can also follow your favourite establishments and be alerted when new food is listed each day.
“One third of all food produced is wasted,” Karma CEO Ståhlberg Nordegren tells me. “We’re reducing food waste by enabling restaurants and grocery stores to sell their surplus food through our app… Consumers like you and me can then buy the food directly through the app and pick it up as take away at the location. We’re helping the seller reduce food waste and increase revenue, consumers get great food at a reduced price, and we help the environment redistributing food instead of wasting it”.
Since Karma’s original launch in its home country of Sweden, the startup has expanded to work with over 1,500 restaurants, grocery stores, hotels, cafes and bakeries to help reduce food waste by selling surplus food to 350,000 Karma users. It counts three of Sweden’s largest supermarkets as marketplace partners, as well as premium restaurants such as Ruta Baga and Marcus Samuelsson’s Kitchen & Table, and major brands such as Sodexo, Radisson and Scandic Hotels.
In February, the company expanded to the U.K., and is already working with over 400 restaurants in London. They include brands such as Aubaine, Polpo, Caravan, K10, Taylor St Barista’s, Ned’s Noodle Bar, and Detox Kitchen.
Ståhlberg Nordegren says Karma’s most frequent users are young professionals between the ages of 25 and 40, who typically work in the city and pick up Karma on their way home. “Students and the elderly also love the app as it’s a great way to discover really good food for less,” he adds.
Meanwhile, Karma will use the funding to continue to develop its product range, especially within supermarkets, and to expand to new markets, starting with Europe. The company plans to grow from 35 people based in Stockholm today to over 100 across five markets by the end of next year, and over 150 by mid-2020.