The Two Faces of Open Source: ECT News Roundtable, Episode 5

The open source software movement has evolved dramatically over the past two decades. Many businesses that once considered open source a threat now recognize its value.

Yet despite that increased enthusiasm among enterprises, consumer interest by and large has not materialized.

With large companies increasingly embracing open source, what does it mean to be a part of the free and open source software, or FOSS, “community”?

Why have consumers been so slow to adopt open source software?

Our roundtable of industry insiders tackled those questions during their lengthy virtual conversation on technology trends.

Participants in the discussion were Rob Enderle, principal analyst at the Enderle Group; Ed Moyle, partner at SecurityCurve; Denis Pombriant, managing principal at the Beagle Research Group; and Jonathan Terrasi, a tech journalist who focuses on computer security, encryption, open source, politics and current affairs.

A Chaotic Community

For enterprises, adoption of open source “means better control and understanding of the code they use and how it is progressing,” said Rob Enderle. “In effect, it lowers their operational risk IF they properly fund the effort. If they don’t, it results in a bigger exposure due to the misuse of these tools.”

“Big companies have taken on fully commoditized infrastructure tech. That’s normal and it will continue,” said Denis Pombriant.

Being part of the FOSS community “really just means providing value back to others in whatever way you’re able,” offered Ed Moyle. “This can be as a developer but also as a user, tester or financial supporter. Anyone providing value back — small or large — is part of the community.”

The freedom in adapting open source to one’s own particular needs — one of the hallmark principles of the FOSS movement — can pose problems for a community that is at best loosely knit.

“Considering the degree to which bigger tech companies with traditionally proprietary models are incorporating open source projects, the FOSS community looks to be on course for a schism,” warned Jonathan Terrasi.

“The Linux Foundations and the Red Hats of the community will likely keep progressing in the direction they’re headed, while smaller, scrappier projects with more ideological grounding in FOSS will eschew those projects and go their own way,” he predicted.

“With each set on their own course, their challenges will be different,” Terrasi continued.

“In the former case — that of the standouts in FOSS, like Linux — their job will increasingly center on balancing the demands of very different clients, such as Microsoft and Google in Linux’s case,” he said.

“For the latter case, the obstacles are less foreseeable, but will probably have to do with keeping from getting starved for oxygen when their larger cousins scoop up most of the corporate investment,” Terrasi speculated.

Those cautionary notes aside, “now is a really exciting time in open source,” maintained Moyle.

“Everything DevOps is open source: Source control (git), CI/CD (jenkins, ansible), containers (docker, rkt), orchestration (kubernetes). Desktop Linux is more viable than it’s ever been, and Linux is the primary cloud platform, by a wide margin. That’s the positive side, and it’s tremendously exciting,” he said.

“The less positive side, though, is that there’s a trend of ‘faux-pen source’ projects out there that seem to be increasing in prevalence,” observed Moyle.

“By this I mean one of two things: 1) projects that claim to be open source and are marketed or hyped that way, but that have bogus — that is, non-free — licenses; or 2) where the source is technically open, but the functionality is broken in some fundamental way unless you pay someone money,” he explained.

“I have NO problem with someone wanting to make a buck for their work,” Moyle emphasized.

“For example, a company charging for support, charging for additional data and services — in the security world, for example, charging for signatures/rules, etc. These are all reasonable to me,” he said.

“But it does really irritate me when something is released as ‘open source,’ seemingly for marketing purposes, but in order for it to do anything useful you need to pay someone money. While such an offering might adhere to a strict reading of a free license, it’s hard to argue that it’s in keeping with the ‘free and open’ community-based nature of open source. It seems disingenuous to me,” Moyle said.

“I had never heard of the ‘faux-pen source’ moniker, but it is a pithy term for a real phenomenon,” Terrasi responded.

“I wholly agree that people should be paid for their work, but as Ed put it, it is disingenuous to lean on the goodwill that open source communities strive to engender as a way of deriving revenue,” he added.

“There is no doubt that open source software enjoys better representation now than it ever has — even the Linux desktop — but this may owe partly to the exploitation of FOSS’ conspicuous weaknesses of being free and open,” Terrasi suggested.

“Because they are open, they invite any code contribution of sufficient merit, and because developers need to eat, they invite any monetary contribution whenever feasible,” he pointed out.

“However, larger companies exploit this to swoop in, colonize the code and/or funding base, and then take control of the project from within. A recent article about Twitter’s Bluesky project quoted experts who warned of exactly that phenomenon,” Terrasi said. “The challenge going forward will be for FOSS projects to reconcile continuing to exist with preserving the integrity of their mission.”

Consumer Resistance

There’s a simple reason for low consumer interest in open source software, suggested Enderle.

“They aren’t coders,” he said.

Open source is “mostly infrastructure,” noted Pombriant.

“Customers still need service, and therefore adhere to brands and their support. Open source is problematic from a business model approach and from a customer service one,” he maintained.

“As far as the open source revolution has come, there are still pervasive misconceptions surrounding it,” said Terrasi.

“There is still the unfounded but stubborn perception on the part of the consumer that open source software is insecure, that because it’s ‘free’ the quality is inferior — in the vein of the old adage ‘you get what you pay for’ — and that it’s not as flashy or glossy,” he continued.

“I think most developers know better than to buy into these myths, but because their customers don’t, they’re not going to try to deliver open source products over their customers’ objections,” Terrasi reasoned.

“Frankly, there have been, and still are, significant barriers to entry for many FOSS tools,” Moyle pointed out.

“For example, I use Linux as my primary platform and it’s a nonstop PITA — I say this lovingly,” he said.

“The ongoing challenges are legion. For example, the integrated fingerprint scanner on my laptop doesn’t work — no drivers. I’ve had to tweak the BIOS to get it to run appropriately. I’ve had to write code to get dual monitors to work, make changes to support HDMI audio output, etc.,” Moyle said.
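Tweaks like the ones Moyle describes often boil down to short scripts. Below is a minimal sketch of a dual-monitor layout script built on xrandr; the output names (eDP-1 for the laptop panel, HDMI-1 for the external display) are hypothetical placeholders that vary by machine.

```shell
#!/bin/sh
# Hypothetical dual-monitor tweak of the kind described above.
# Output names are placeholders; list the real ones with `xrandr -q`.
PRIMARY="eDP-1"
EXTERNAL="HDMI-1"

# Build the layout command: laptop panel primary, external screen to its right.
CMD="xrandr --output $PRIMARY --auto --primary --output $EXTERNAL --auto --right-of $PRIMARY"
echo "$CMD"   # printed here for illustration; run it directly in a live X session
```

On a real system you would execute the command rather than echo it, and typically hook the script into session startup so the layout survives a reboot.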

“For many orgs, the hassle factor of having to deal with these tweaks yourself — not just in Linux, but in any FOSS that is primarily community driven — is more expensive than paying a vendor for a COTS alternative. This is why you see such an uptick in open source that has industry backing: docker, SaltStack, Kubernetes, etc. — because that minimizes the hassle,” he explained.

“For me it still comes down to consumers not having, and not wanting to develop, the needed skills,” Enderle said.

“Working on things is becoming a lost art. I’ve had kids ask me what an air cleaner on a car is. To the younger generations, much of what they get is kind of like magic. It just works, and they don’t really care how until it doesn’t — and then they only want someone else to fix it. Granted, with some of the newer complex technical products that is probably the safer path,” he added.

“My Linux use has very seldom required anything so drastic as what Ed has encountered, but I know that it can definitely break down like that,” said Terrasi.

“Linux has come a long way, and there are definitely distros that are as stable as any consumer would expect their operating system to be, but the bad press from Linux’s Wild West days has taken its toll,” he noted.

“Also, at least with open source OSes — namely Linux — I think there’s just a real apprehension about changing one’s system that fundamentally. There’s this idea many users have that the developers who made the device know best and have your best interests at heart, so you shouldn’t contravene them by installing your own OS,” Terrasi observed.

“It’s this deference to authority — in a specific context — that is weirdly dissonant with a social climate right now where perceived ‘elites’ are distrusted in favor of the expressed will of the community of non-elites, but it’s a real thing and you see it every day like in the way people flock to the Apple Genius Bar and unconditionally trust the intentions of a roughly (US)$1 trillion company,” he pointed out.

“Open source on the whole, and Linux in particular, are never going to enjoy any home consumer market share to speak of until that misconception is overcome,” Terrasi maintained.

Aside from the technical difficulties consumers may encounter with open source, there’s the issue of visibility. Many consumers may not even be familiar with the term, much less with what it means.

“It’s difficult for open source projects to market and advertise the same way that closed-source technology vendors do,” noted Moyle.

“It’s also difficult for them to use the same techniques to gain market share — for example, establishing VAR arrangements or channel partnerships,” he said.

“From an end-user point of view, the support experience is a whole different ballgame. If a commercial product doesn’t work or has an issue, you can work with someone directly to solve the problem,” Moyle said.

“In the open source world, the onus is on you in many cases to solve your own problem with support from the broader community. This can be a tall hill to climb for someone with little or no technical expertise,” he pointed out.

“I concede that the lack of support is a genuine and understandable barrier,” said Terrasi.

“I don’t see Canonical setting up ‘Einstein Lounges’ anytime soon. I do take some solace in the fact that we live in an age where no one makes a purchase without reading numerous online reviews and, jointly, in the fact that some of the beginner-friendly Linux distros have welcoming and knowledgeable communities who want newcomers to stick around,” he added.

“I’m not proclaiming the Year of the Linux desktop anytime soon,” Terrasi said, “but taking Linux as an example, there are some things that open source projects are doing right to attract users from the mainstream consumer base.”

Mick Brady is managing editor of ECT News Network.

Oops – Google May Have Sent Your Embarrassing Private Video to a Stranger

Google misdirected a number of private videos that users of its Google Photos app intended to export via Google Takeout, sending them instead to strangers’ archives, 9to5Google reported Monday.

The company emailed affected users to inform them that a technical issue caused the error, which incorrectly transferred videos for several days before it was fixed.

Google recommended that affected users back up their content again and delete their previous backup. They were advised to contact Google Support for further assistance.

Google Photos passed the 1 billion user mark last summer.

Although it said just 0.01 percent of users were affected, Google did not indicate whether that percentage applied to Google Photos or Google Takeout users.

“Google did fix the issue quickly,” acknowledged Erich Kron, security awareness advocate at KnowBe4.

“However, the notification process to those impacted was less than satisfactory and left out a lot of details, leaving those possibly impacted unsure of what the exposure risks were for them,” he told TechNewsWorld. “When dealing with an issue that impacts privacy in the way that improperly sending files as sensitive as photos and videos is, the communication needs to be very clear and informative.”

Google’s notification “reads like they really don’t care about what happened to the users, and that could backfire badly with organizations like the European Commission,” noted Rob Enderle, principal analyst at the Enderle Group.

The issue “highlights the challenge with protecting and managing personal photos and videos,” said Josh Bohls, founder of Inkscreen.

People use their mobile devices to scan business documents, and they use a broad range of photos, video and audio for everyday tasks that drive business processes, he told TechNewsWorld.

“If you work for a law firm, healthcare provider, insurance company, or in another regulated industry and take photos or record videos as part of your job, your company should strongly consider a solution to protect and manage this content — especially if you use Google Photos,” Bohls said.

Fear and Anger

The problem “shouldn’t happen at all, and it once again points to Google as a firm that can’t be trusted with your data,” Enderle told TechNewsWorld.

“If the video content was sensitive and private, then you could have a violation of the GDPR or California’s CCPA,” remarked Mike Jude, research director at IDC. “That sort of thing could trigger fines and remedial action.”

Google’s failure to disclose who wrongly received videos could lead to more trouble for the company, Enderle pointed out. “Users should have a right to that information, and they likely could sue Google to get it. Then, depending on what’s in the video, sue them for damages.”

Any indemnification clause in the user agreement might not protect Google because the issue was due to negligence, he said. “I wouldn’t be surprised if we saw a class action suit come out of this.”

While the victims can file suit, or file a complaint under applicable privacy laws, it could backfire on them, IDC’s Jude told TechNewsWorld.

“In the case of provocative material, the temptation would be to pay the ransom rather than face public disclosure,” he said.

By the Numbers

“It is possible that thousands were impacted,” Jude remarked. “It wouldn’t pay for Google to announce something like this unless it had a pretty wide reach.”

The issue “could be quite serious for those affected,” said Paul Bischoff, privacy advocate at Comparitech.

However, the scale of the problem depends on who really was affected, he told TechNewsWorld.

Google pinned that number at 0.01 percent, but “do they mean 0.01 percent of Takeout users or of Photo users?” Bischoff asked. “The former would be a much smaller number.”

Further, the leaked videos went to other users, not malicious actors, he noted, and “it was not intentional on Google’s part. For me, those two facts make this less of a big deal.”

If Google had let an attacker hack its systems or had been hiding a nefarious practice, its privacy or security standards would be called into question, Bischoff said, but “bugs happen, and I think people are more forgiving for that sort of thing.”

What Google Can or Should Do

Google “should do whatever it takes to secure the mis-sent videos,” Enderle recommended.

“It probably won’t be enough, but if they wait for regulatory action, the result could be very expensive,” he warned.

“Ethically, Google should help them,” said IDC’s Jude. “Would they? Probably not, unless there’s some explicit guarantee that the data stored with Google is secure.”

Google could offer identity theft protection for the victims, “but there’s not much it can do until the damage is done,” Comparitech’s Bischoff noted.

If it can find out exactly which videos and photos were sent incorrectly, Google “should absolutely inform the owners of what was compromised,” Bischoff recommended. It might step in as a mediator to protect both parties’ privacy in case any victims wanted to communicate with those who received their videos by mistake.

Google “is a free service, more or less, that provides access in exchange for looking over your shoulder as you use the service,” Jude remarked. “It is not a public commons, and there really should be no expectation of privacy.”

Users should opt for a paid storage service, suggested Enderle, while Jude said storing videos and photos locally might be a better option.

“I saw a 2-TB SSD the other day for (US)$69,” he said. “Back when I was in college, I saw an article in the magazine ‘Datamation’ that said the total computer storage of the planet was about 1 TB.”

Richard Adhikari has been an ECT News Network reporter since 2008. His areas of focus include cybersecurity, mobile technologies, CRM, databases, software development, mainframe and mid-range computing, and application development. He has written and edited for numerous publications, including Information Week and Computerworld. He is the author of two books on client/server technology. Email Richard.

Bridging the IoT Innovation-Security Gap

There is a problem with the Internet of Things: It’s incredibly insecure.

This is not a problem that is inherent to the idea of smart devices. Wearables, smart houses, and fitness tracking apps can be made secure — or at least more secure than they currently are.

The problem, instead, is one that largely has been created by the companies that make IoT devices. Many of these devices are manufactured by relatively small, relatively new companies with little expertise when it comes to cybersecurity. Even large companies, however, including those that produce thousands of hackable smart TVs a year, cannot be forgiven so easily.

In truth, when it comes to the Internet of Things, many companies have prioritized connectivity and “innovation” (read: popular but insecure features) over cybersecurity.

These approaches have led to a variety of security vulnerabilities in IoT devices.

Insufficient Testing and Updating

Perhaps the biggest problem when it comes to the cybersecurity of IoT devices is that many companies simply don’t support them after release. In fact, many IoT devices don’t even have the capability of being updated, even against the most common types of cyberattack.

This means that even a device that was secure when it was released can quickly become highly vulnerable. Manufacturers often are more focused on releasing their new device than on spending time to patch “historic” security flaws. This attitude can leave these devices in a permanently insecure state.

Failing to update these devices is a huge problem — and not just for consumers who have their data stolen. It also means that a company’s devices can fall victim to a single, large cyberattack that could ruin its reputation and erase its profitability.

Default Passwords

A second major — and avoidable — problem with IoT devices is that they ship with default passwords, and users are not reminded to change them in order to secure their home IoT networks. This is despite industry and government-level advice against using default passwords.

This vulnerability led to the highest-profile IoT hack to date, the Mirai botnet, which compromised millions of IoT devices by the simple method of using their default passwords.

Though some UK-based Web hosts detected the attack and blocked it from reaching consumer devices, dozens of manufacturers had their devices hacked in this way. Nevertheless, in the absence of legal requirements, manufacturers continue to ship devices with default passwords.
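Mirai’s scanning worked because the universe of factory defaults is tiny and well known. The sketch below, using a short hypothetical credential list, shows how an administrator might audit a configured password against common defaults before a device goes online.

```shell
#!/bin/sh
# Hypothetical sketch: flag passwords that appear on a short list of
# well-known vendor defaults, the kind Mirai's scanner tried first.
# The list here is illustrative, not exhaustive.
DEFAULTS="admin
password
12345
root
888888"

is_default() {
    # Exact-match the candidate against each line of the list.
    printf '%s\n' "$DEFAULTS" | grep -qx -- "$1"
}

check() {
    if is_default "$1"; then
        echo "WARNING: '$1' is a well-known default password"
    else
        echo "OK: '$1' not on the default list"
    fi
}

check "admin"
check "correct-horse-battery"
```

A real deployment would draw on a maintained wordlist and, better still, force a password change on first boot rather than merely warning.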

New Types of Ransomware

IoT devices are particularly susceptible to hacking for a more complex reason: They are integrated into the home and corporate networks to a degree unprecedented in traditional systems.

IoT devices typically have a very rapid development process, and during this rush there appears to be no time to think through what such devices actually need access to. As a result, a typical IoT device, or app, will ask for far more privileges than it needs to complete its basic functions.

That’s a huge problem, because it can mean that spyware in the IoT can access far more information than it should be able to.

Let’s take an example. IP cameras typically are sold as IoT devices for smart homes, or for use as webcams. The manufacturer of the device typically will ship it without hardened or updated firmware, and with default passwords (see above). The problem is that if hackers know this default password (and they do, trust me), it is a simple matter to access the feed from the camera.

It gets worse. Using the camera, a hacker can capture sensitive information such as credit card details, passwords, or footage intended for “personal use.” This then can be used to execute a larger hack or to blackmail the victim.

AI and Automation

A more exotic issue with IoT security stems from the fact that IoT networks already are so large and complicated that they are administered via artificial intelligence algorithms rather than by people. For many companies, using AI is the only way to handle the vast amounts of data produced by user devices, and their profitability relies on this functionality.

The issue here is that AIs can make decisions that affect the lives and security of millions of users. Without the necessary staff or expertise to analyze the implications of these decisions, IoT companies can — albeit accidentally — compromise their IoT networks.

Of all the issues on this list, this arguably is the most worrying. That’s because AI-driven IoT systems now handle many critical functions in society, from the time tracking software used to pay employees to the machines that keep patients alive in your local hospital.

The Solutions

The actions of individual companies or individual consumers alone are not going to solve this problem. Instead, there needs to be a paradigm shift in the industry. It’s telling that no (respectable) company would sell, say, time tracking software without committing to keep it updated. Selling a physical device without that commitment should be seen as equally absurd.

Indeed, many of the problems mentioned here — the use of default passwords, or a careless approach to app permissions — were overcome long ago in relation to traditional software. What is required, then, might only be a common-sense approach to locking down IoT devices.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.

Sam Bocetta has been an ECT News Network columnist since 2019. A freelance journalist specializing in U.S. diplomacy and national security, Bocetta’s emphases are technology trends in cyberwarfare, cyberdefense and cryptography.

Coronavirus Pandemic: 6 Things We Should Be Doing

As I write this, the first studies of the Coronavirus outbreak are coming in. The count now exceeds 17,500 cases in 24 countries with more than 360 deaths, almost all in China.

Most at risk are older males with pre-existing chronic diseases that weaken their immune systems. Women appear to have a higher natural resistance to viruses. The main Coronavirus symptoms are cough, headache, muscle pain and fever.

You can follow the progression of the virus on this real-time map. China is being hit hard, and it is now an epidemic there.

Since it has spread across the world, it can be considered a pandemic.

The World Health Organization has declared it a global health emergency.

Airlines have been shutting down routes, at least one cruise ship is under quarantine, and countries are scanning aggressively for the virus at their borders.

I think there are six things we should be doing immediately. I’ll share my suggestions and then close with my product of the week.

1. Rule Out Physical Greetings

One of the obvious practices to discourage is shaking hands and casual kissing. These are the most likely ways people transfer viruses directly to others, and these anachronistic practices are widely credited with making us sick in general.

The original purpose of handshaking was to compel armed men effectively to disarm themselves in greeting one another. It is just a habit now, and with this virus spreading rapidly, it is a habit that should be discouraged.

Kissing also spreads the virus, along with plenty of other diseases. I’m not suggesting we give up romance, but we could do without kissing as a greeting. In this #metoo world, the practice is already worth discouraging between, say, a male manager and a female employee; if we don’t want to spread the virus, such casual contact should be avoided.

Given this would limit the spread of other diseases — we are in flu season, after all — these practices likely should be permanently retired.

One of the reasons to make this formal is so that peer pressure doesn’t push you into doing something unsafe. If you refuse to shake hands or kiss, it can look like you are antisocial, but if it is a general recommendation, you won’t stand alone.

2. Cut Back on Travel

The most likely places to catch and spread a virus are places where people congregate or are held together for long periods. Airplanes, trains and ships are all places where you are in proximity with others for long periods. If a single person presents with symptoms on any of these conveyances, it is likely the entire vehicle will be quarantined, which would have a significant impact on your life and ability to do your job, regardless of whether you get sick or not.

There are few things worse than being sick away from home. Being unable to get home can be so problematic that some will try to conceal their illness so they can travel. If you don’t have to take a trip or use public transportation, avoid it as much as possible, and that should lower your chance of getting the virus.

You also might want to consider this practice during flu season, when an unusually large number of contagious people may be traveling, even in a good year.

Staycations can be fun and far more relaxing than traveling, considering that you may return more tired than you were when you left.

3. Keep Your Hands Away From Your Face

With this virus, chances are you’ll pick it up through something or someone you touched and then become infected by bringing your hands to your face. Maybe your nose itches or there is something in your eye. As I was writing this, I caught myself rubbing my eyes.

At my desk in my home office, there is probably little risk, but our habits move with us, and what might be safe to do at home is far less safe around a lot of people. So, I recommend practicing using your sleeve rather than your hands, and aggressively wash your hands every chance you get.

4. Heads Up

Generally, you can tell at a glance if someone isn’t feeling well, but not if your head is in your smartphone and your mind is someplace else. This recommendation is also good advice for life in general, because not looking where you are going can be more dangerous than any virus if you lose track of your surroundings and step in front of a vehicle or off a ledge.

If you see someone who appears to be ill, maintain your distance, but you can’t avoid someone if you aren’t looking. If you see a lot of sick people and aren’t in a hospital, you need to get out of there before you join the club.

This lesson is something we should be teaching our kids more aggressively. Too many are buried in their phones and oblivious to what is going on around them, which can be problematic for their life expectancy in general.

5. Work From Home

The streets in much of China are nearly empty. So are the mass transit systems and stores. Folks have been asked to shelter in place.

If you aren’t set up to work from home or you lack critical supplies, you’ll be unable to do your job, or you’ll have to risk going out to get supplies. Make sure that if you have to work from home, you can. Keep enough of your medications on hand for several weeks, and enough food for at least a week.

It might be wise for companies to encourage working at home at this time, as that will reduce the number of people coming into the office and the related opportunities to spread the virus throughout the company.

6. Hold Drills

We should have drills that showcase what to do in case a pandemic hits your area. These drills should not be limited to EMTs and other first responders — they should include the general population. Certainly companies should dust off their disaster preparedness plans and make sure they are up to date and capable of dealing with a pandemic.

One product to consider is BlackBerry’s AtHoc platform, which is designed to coordinate disaster response while rapidly and effectively ensuring employee safety. Problems become disasters through a lack of planning, and most firms do not have an up-to-date disaster plan, let alone one that is set up to deal with epidemics or pandemics. Now is the time to fix that.

Wrapping Up: Don’t Panic

One way to deal with a pending disaster is through thorough planning and practice. A major disaster is far less frightening if you already know what to do. Much of the terror is a result of not knowing what to do to protect yourself, your loved ones, and your employees.

Simply making sure you and your people know what to do can mitigate not only the danger but also the fear associated with it. Right now, your chances of getting sick are very slim, but that will change — either with the Coronavirus or some future threat.

Knowing what to do when that happens can influence not only whether you survive, but also how much trauma you experience. You have some control over both, and I’m suggesting you start now to exercise that control.

I get a lot of laptops in, but the machine I live on is a desktop PC, and I generally build my own. This does lead to some interesting support and failure experiences. For instance, my last water-cooled machine didn’t have enough airflow through its radiator, and it overheated, bleeding its blue cooling liquid all over my office floor. It looked like I’d murdered a Smurf (thank heaven for tile floors and the fact Smurfs don’t exist — otherwise I’d be in trouble).

It was surprisingly upsetting to walk into my office that morning to find my PC sitting in what appeared to be a pool of Smurf blood. I’d been mucking around with trying to improve the cooling to avoid such a crime scene when I got in a 32 Core Threadripper Talon system from Falcon-NW and fell in love.

32 Core Threadripper Talon System From Falcon-NW

Once again, I was reminded that moving from PC to PC, particularly when you are on Windows 10 and Office 365, has become a ton easier. I did the swap in about 30 minutes, though it took the better part of an hour for the system to patch itself fully and sync with the Microsoft cloud while I did other things.

I can still remember this same process taking days in the past, and being so annoying that I dreaded a new PC. Now I look forward to it again. I should add that Office 365 migrations have improved since my last swap, and just clicking on the Office icon brought me to a download screen, and from then on, the installation was largely automatic.

The Talon with this Threadripper processor comes with an Asetek sealed liquid cooling system, which means I shouldn’t have to worry about my PC bleeding out on me again. It has a 1-TB SSD from Corsair (MP600) and one of the new AMD Radeon RX 5700 XT graphics cards.

My typical loading test is to install and run Ashes of the Singularity in Crazy mode, with massive numbers of vehicles, and see how long it takes for the system to overheat or fail. This box took the test with relative ease. I got some frame drops, but generally the game remained playable. While my prior Ryzen 9-based desktop system was powerful, this one blew it away. This 32 Core Threadripper system is AWESOME, and it is my product of the week.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.

Rob Enderle has been an ECT News Network columnist since 2003. His areas of interest include AI, autonomous driving, drones, personal technology, emerging technology, regulation, litigation, M&E, and technology in politics. He has an MBA in human resources, marketing and computer science. He is also a certified management accountant. Enderle currently is president and principal analyst of the Enderle Group, a consultancy that serves the technology industry. He formerly served as a senior research fellow at Giga Information Group and Forrester. Email Rob.

Canonical Introduces Scalable Android-Based Cloud Platform


Canonical is deploying a scalable Android-based platform that delivers mobile and desktop enterprise applications from the cloud.

The company on Tuesday announced its Anbox Cloud containerized workload platform. Anbox Cloud allows apps to be streamed to any operating system or form factor. Its uses include cloud gaming, enterprise workplace applications, software testing and mobile device virtualization.

“Anbox Cloud is the first commercially available mobile cloud computing platform,” said Galem Kayo, product manager for Ubuntu at Canonical.

“The platform is innovative in that it uses Android as a guest OS for mobile application virtualization in the cloud,” he told LinuxInsider.

Canonical’s cloud technologies underlying Anbox Cloud — LXD, Juju and MaaS (Metal as a Service) — enable scalability and manageability suitable for enterprise use cases, Kayo said.

Canonical services bundled into the Anbox Cloud provide enterprise-grade support like Kernel Livepatch and Extended Security Maintenance, as well as service level agreements, he added.

What Anbox Does

With the rise of 5G and edge computing, enterprises are under pressure to provide a high-performance, high-density compute experience to any and all user devices, according to Canonical. Cloud gaming, enterprise app proliferation, software testing and mobile device virtualization are driving the need for distributed applications from the cloud.

The Anbox Cloud provides the ability to offload compute, storage and energy-intensive applications from x86 and ARM devices to the cloud. It enables end-users to consume advanced workloads by streaming them directly to their devices. Developers can deliver an on-demand application experience through a platform that provides more control over performance and infrastructure costs, with the flexibility to scale based on user demand.

“Driven by emerging 5G networks and edge computing, millions of users will benefit from access to ultra-rich, on-demand Android applications on a platform of their choice,” said Stephan Fabel, director of product at Canonical. “Enterprises are now empowered to deliver high performance, high-density computing to any device remotely, with reduced power consumption and in an economical manner.”

Fast-Growing Demand

The demand for Android in the cloud is growing fast. Demand is driven by enterprises and startup innovators aware of emerging technologies like 5G, edge computing, and system containers that make Android a viable option in the cloud, Kayo explained.

“The ongoing shift in consumer preferences towards richer and ubiquitous mobile experiences also accelerates demand. Geographically, we see a fast growing demand in China, North America and Europe,” he said.

Android is the largest mobile OS (75 percent OS market share, 2.5 billion users, close to 3 million apps). It offers the advantage of a much richer application and developer ecosystem compared to other platforms.

Targeted Adopters

Canonical sees its primary adopters of the Anbox Cloud coming from the ranks of game-streaming companies, enterprises, mobile telco operators and mobile app developers, noted Kayo.

Game-streaming companies, which are financed mostly through advertising or subscriptions, will benefit from the high scalability of Anbox Cloud. Anbox creates an on-demand experience for gamers while providing a protected content distribution channel for game developers.

Game-streaming profit margins can be thin if economies of scale are not reached. Anbox Cloud’s superior scalability allows game-streaming companies to improve their margins.

Enterprises can leverage Anbox Cloud to accelerate digital transformation. Scalability allows the distribution of mobile applications to any number of devices, avoiding capital expenditures for equipment acquisition and operating expenses for maintenance.

Mobile telco operators can leverage Anbox Cloud to create innovative value-added services that improve the return on investment of new infrastructure. The scalability of Anbox Cloud allows them to serve new digital experiences to millions of customers in their mobile networks.

For mobile app developers, Anbox Cloud makes it possible to test applications at scale in the cloud. These tests can be appended to a CI/CD (continuous integration/continuous deployment) pipeline for automatic execution.
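As a rough illustration of that pattern, a job like the following could be appended to a CI pipeline, pointing standard Android tooling at a cloud-hosted device instead of a local emulator. The job name and the host variable are placeholders for illustration; this article does not describe Anbox Cloud's actual integration points.

```yaml
# Hypothetical CI job (GitLab CI syntax): run instrumented Android
# tests against a cloud-hosted Android instance rather than a
# local emulator.
android-cloud-tests:
  stage: test
  script:
    # ANDROID_CLOUD_HOST is a placeholder for the address of an
    # Android instance in the cloud that exposes the adb protocol.
    - adb connect "$ANDROID_CLOUD_HOST:5555"
    - ./gradlew connectedAndroidTest
```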

Viable Cloud Platform Option

Android is based on Linux, so a Canonical-backed Android cloud platform might be more secure than the Google alternative, suggested Rob Enderle, principal analyst at the Enderle Group.

“There is a fear, particularly after the Sonos accusation of theft, that Google is stealing intellectual property, and a more secure place to develop than the Google Cloud would likely be appreciated,” he told LinuxInsider.

The Anbox announcement is not a huge reach given Android’s roots, but it should allow Canonical to pull a larger audience, particularly from those concerned about Google’s practices, Enderle said.

An Android-based cloud platform is very viable, noted Charles King, principal analyst at Pund-IT.

“Android is and probably will remain the most popular and widely used smartphone OS and platform worldwide. If Canonical can successfully build and differentiate Anbox Cloud, it could do very well,” he told LinuxInsider.

Competitive Potential

Google offers a host of Android developer tools that work in conjunction with services for hosting and supporting apps on its Google Cloud Platform, noted King. “Given the company’s growing focus on business and enterprise services, that’s the 800-pound gorilla that Canonical will have to deal with.”

Canonical, parent company of Ubuntu Linux, seems to be reaching beyond its Linux OS offerings to expand or create a new chapter in cloud computing, he observed. “This looks like a smart move on Canonical’s part. The company has established itself pretty well but needs to look beyond its current solution set for future growth.”

Canonical’s business customer base should offer it a solid set of prospects for Anbox Cloud. Getting those organizations on board will be critically important, King added.

“With Anbox Cloud, Canonical is bringing to market a disruptive product that is both powerful and easy to consume,” said Jacob Smith, CMO at Packet. “As small, low-powered devices inundate our world, offloading applications to nearby cloud servers opens up a huge number of opportunities for efficiency, as well as new experiences.”

Multiple Uses

Enterprises can reduce their internal application development costs through a single application used across different form factors and operating systems. Developers also can utilize Anbox Cloud as part of their application development process to emulate thousands of Android devices across different test scenarios and for integration in CI/CD pipelines.

Adopters can host the Anbox Cloud platform in the public cloud for virtually unlimited capacity, high reliability and elasticity, or on private cloud edge infrastructure where low latency and data privacy are priorities.

Public and private cloud service providers can integrate Anbox Cloud into their offering to enable the delivery of mobile applications in a Platform as a Service or Software as a Service model. Telecommunication providers can create innovative value-added services based on virtualized mobile devices for their 4G, LTE and 5G mobile network customers.

How It Works

Anbox Cloud is built on a range of Canonical technologies and runs Android on the Ubuntu 18.04 LTS kernel. Containerization is provided by secure and isolated LXD system containers.

Android runs in system containers in the cloud. A Web or native client wrapped into a mobile or desktop application sends sensor input to the container.

Graphical output is streamed back to the client via WebRTC. Direct access through a Web browser makes it possible to deliver Android applications to any device that can run a browser.

LXD containers are lightweight, resulting in at least twice the container density compared to Android emulation in virtual machines, depending on streaming quality or workload complexity. A higher container density drives scalability up and unit economics down.

MaaS provides cloud-style provisioning of physical servers for remote infrastructures. Juju, an automatic service orchestration project launched by Canonical, provides automation tooling for easy deployment, management and reduced operational costs.
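Juju models a deployment declaratively as a "bundle." The sketch below uses the real bundle format, but the charm names and unit counts are placeholders for illustration, not Canonical's published Anbox Cloud charms:

```yaml
# Hypothetical Juju bundle: Android container hosts and a streaming
# gateway scaled as separate applications.
applications:
  android-containers:
    charm: anbox-cloud        # placeholder charm name
    num_units: 4              # scale out by adding units
  streaming-gateway:
    charm: streaming-gateway  # placeholder charm name
    num_units: 2
relations:
  - ["android-containers", "streaming-gateway"]
```

Deploying such a bundle would then be a single `juju deploy ./bundle.yaml`, with `juju add-unit` used to scale individual applications.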

Boosts Cloud Gaming

The Anbox Cloud taps into the growing cloud gaming segment. The new cloud platform enables graphics- and memory-intensive mobile games to be scaled to vast numbers of users while retaining the responsiveness and ultra-low latency demanded by gamers.

It removes the need to download a game locally on a device. It creates an on-demand experience for gamers while providing a protected content distribution channel for game developers.

Anbox Cloud enables enterprises to accelerate their digital transformation initiatives by delivering workplace applications directly to employees’ devices. It ensures data privacy and compliance.

Support and Pricing

The Ubuntu Advantage support program included with Anbox Cloud provides continuous support and security updates for up to 10 years.

Canonical partners with Packet, a cloud computing infrastructure provider, to deploy Anbox Cloud on-premises or at target edge locations around the world.

Anbox Cloud is available for self-hosted deployments in public or private clouds. The product is priced per compute instance, per year, with support included and SLAs guaranteed.

Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open source technologies. He has written numerous reviews of Linux distros and other open source software. Email Jack.

Know Your Enemy: The Difficulty of Defining Deepfakes


Facebook recently promised that it would increase efforts to remove so-called “deepfake” videos, including content containing “misleading manipulated media.”

In addition to fears that deepfakes — altered videos that appear to be authentic — could impact the upcoming 2020 general election in the United States, there are growing concerns that they could ruin reputations and impact businesses.

A manipulated video that looks real could convince viewers to believe that the subjects in the video said things they didn’t say, or did things they didn’t do.

Deepfakes have become more sophisticated and easier to produce, thanks to artificial intelligence and machine learning, which can be applied to existing videos quickly and easily, producing results that once took professional special effects teams and digital artists hours or days to achieve.

“Deepfake technology is being weaponized for political misinformation and cybercrime,” said Robert Prigge, CEO of Jumio.

In one high profile case, criminals last year used AI-based software to impersonate a chief executive’s voice and demand a fraudulent transfer of US$243,000, Prigge told TechNewsWorld.

“Unfortunately, deepfakes can also be used to bypass many biometric-based identity verification systems, which have rapidly grown in popularity in response to impersonation attacks, identity theft and social engineering,” he added.

Facing Off Against Deepfakes

Given the potential for damage, both Facebook and Twitter have banned such content. However, it’s not clear what the bans cover. For its part, Facebook will utilize third-party fact-checkers, reportedly including more than 50 partners working worldwide in more than 40 languages.

“They’re banning videos created through machine learning that are intended to be deceptive,” explained Paul Bischoff, privacy advocate at Comparitech.

“The videos must have both audio and video that the average person wouldn’t reasonably assume is fake,” he told TechNewsWorld.

“A deepfake superimposes existing video footage of a face onto a source head and body using advanced neural network-powered AI to create increasingly realistic doctored videos,” noted Prigge. “In other words, a deepfake looks to be a real person’s recorded face and voice, but the words they appear to be speaking were never really uttered by them.”

Defining Deepfakes

One troubling issue with deepfakes is simply determining what is a deepfake and what is just edited video. In many cases deepfakes are built by utilizing the latest technology to edit or manipulate video. News outlets regularly edit interviews, press conferences and other events when crafting news stories, as a way to highlight certain elements and get juicy sound bites.

Of course, there have been plenty of criticisms of mainstream news media for manipulating video footage to change the context without AI or machine learning, simply using the tools of the editing suite.

Deepfakes generally are viewed as far more dangerous because it isn’t just context that is altered.

“At its heart, a deepfake is when someone uses sophisticated technology — artificial intelligence — to blend multiple images or audio together in order to change its original meaning and convey something that is not true or valid,” said Chris Olson, CEO of The Media Trust.

“From manipulating audio to creating misleading images, deepfakes foster the spread of disinformation as the end user typically doesn’t know that the content or message is not real,” he told TechNewsWorld.

“To varying degrees, social platforms have issued policies prohibiting the posting of highly manipulated videos that are not clearly labeled or readily apparent to consumers as fake,” added Olson.

Still, “while these policies are a step in the right direction, they do not explicitly ban manipulated video or audio,” he pointed out. “Having your account blocked isn’t much of a deterrent.”

Manipulation Without Malice

Facebook’s ban and other efforts to ban or otherwise curb deepfakes do not apply to political speech or parodies.

Consent may be another issue that needs to be addressed.

“This is a great point — fake videos and images can be defined broadly — for example, anything that is manipulated,” said Shuman Ghosemajumder, CTO of Shape Security and former fraud czar at Google.

“But most media created is, to some extent, manipulated,” he told TechNewsWorld.

Manipulations include automatic digital enhancements to photos taken using modern cameras — those equipped with HDR settings or other AI-based enhancement — as well as filters, and aesthetic editing and retouching, noted Ghosemajumder.

“If most media is thus automatically marked on a platform as ‘synthetic’ or ‘manipulated,’ this will reduce the benefit of such a tag,” he remarked.

The next step will be to figure out objective criteria to exclude that type of editing and focus on “maliciously manipulated” media, which could be an inherently subjective standard.

However, “it can’t be a question of individuals consenting to be in videos, because no such consent is generally required of public figures or of videos and images that are taken in public places,” observed Ghosemajumder, “and public figures are the ones that are most likely to be targeted by malicious users of these technologies.”

AI Tools Singled Out

Facebook’s deepfakes ban singles out videos that use AI technology or machine learning to manipulate the content.

“This is an incomplete approach, since most fake content, including misleading videos posted today, are not created with such technology,” said Ghosemajumder.

The now famous Nancy Pelosi video “could have been created with technology from 40-plus years ago, since it was just simple video editing,” he added.

More importantly, “maliciousness cannot be defined based only on the technology used,” said Ghosemajumder, “since much of the same technology used to create a malicious deepfake is already being used to create legitimate works of art, such as the de-aging technology used in The Irishman.”

Viewer Perception

As the Facebook policy stands, satire and parody would be exempt, but what falls into those categories isn’t always clear. Viewer reactions don’t always align with what the content maker may have had in mind. A joke video that falls flat might not be viewed as satire.

“The standards for judging satire or fan films are also subjective — it may be possible to determine what is or is not intended as satire in a court of law to society’s satisfaction in an individual instance, but it is much more difficult to make such determinations automatically for millions of pieces of content in a social media platform,” warned Ghosemajumder.

In addition, even in cases when a video is created with obviously satirical intent, that intent can get lost if the video is shortened, taken out of context, or even just reposted by someone who didn’t understand the original intent.

“There are many examples of satirical content fooling people who didn’t understand the humor,” said Ghosemajumder.

“It’s more about how the audience perceives it. Satire doesn’t fall into the ban. Nor does parody, and if the video is clearly labeled as fiction, it should be fine,” countered Bischoff.

“It is our understanding that Facebook and Twitter are not banning satire or parody — intention is the key differentiator,” added Alexey Khitrov, CEO of ID R&D.

“Satire by definition is the use of humor, exaggeration or irony, whereas the intention of a deepfake is to pass off altered or synthetic video or speech as authentic,” he told TechNewsWorld. “Deepfakes are used to trick a viewer and spread misinformation. While a deepfake aims to deceive the average user, satire is apparent.”

Legal Efforts

There have been legal efforts to stop the proliferation of deepfakes, but the government might not be the best entity to tackle this high-tech problem.

“Over the past two years, several U.S. states introduced legislation to govern deepfakes, with the Malicious Deep Fake Prohibition Act and DEEPFAKES Accountability Act introduced to the U.S. Senate and House of Representatives respectively,” said The Media Trust’s Olson.

Both bills stalled with lawmakers, and neither proposed much change beyond introducing penalties. Even if laws were passed, it is unlikely a legislative approach can keep up with technological advancements.

“It’s very difficult to effectively legislate against a moving target like emerging technology,” warned Olson.

“Until there is perfect recognition that content is a deepfake, platforms and media outlets need to disclose to consumers the source of the content,” he suggested.

“Deepfake videos cannot be stopped, just like photoshopped photos cannot be stopped,” said Josh Bohls, CEO of Inkscreen.

“Libel laws can be expanded to include altered videos that might misrepresent a public figure in a harmful way, providing the subject some kind of recourse,” he told TechNewsWorld. “It would also be prudent to pass laws requiring the labeling of certain categories of videos — political ads, for example — so that the viewer is aware that the content has been altered.”

Tech to Fight Tech

Instead of the government creating new laws, the tech industry could solve the deepfakes problem, even if defining them remains fuzzy. Providing access to technology to determine if a video has been manipulated could be a good first step.

“Several social platforms have taken steps to detect and remove deepfake videos with limited success as detection lags behind the speed at which new technology emerges to create better, more realistic deepfakes in an ever-diminishing period of time,” said Olson.

“The challenge remains on the difficult process of identifying and removing deepfakes before they spread to the general public,” he said.

Social media is where these videos are spreading and where removal is crucial. These platforms are in a good position to roll out new technology.

“Overall, Twitter and Facebook announcing plans to take action against malicious fake content is an excellent first step, and will increase scrutiny and skepticism of media uploaded to the Internet, especially by anonymous or unknown sources,” noted Ghosemajumder.

However, “this is absolutely not a ‘silver bullet’ solution to this problem, for many reasons,” warned Ghosemajumder.

“The detection of fake media is a cat-and-mouse game. If manipulated content is immediately flagged, then malicious actors will experiment against the system with variations of their content until they can pass undetected,” he explained.

“On the other hand, if manipulated content is not immediately flagged or removed, it can be spread — and will quickly morph — causing damage in whatever period exists between creation and uploading and flagging, in a way that may not be possible to easily contain at that point,” Ghosemajumder suggested.

“Finally, the use of fake accounts and automated fraud and abuse is the primary mechanism that malicious actors use to spread disinformation,” he said. “This is one of the key areas social networks need to address with the most sophisticated technology available, not just home-built solutions.”

Anti-Deepfake Tools

Several companies are exploring methods of combating deepfakes. Facebook, Microsoft and AWS launched the Deepfake Detection Challenge to encourage development of open source detection tools. [*Correction – Jan. 22, 2020]

“Without consistently flagging suspect digital content and labeling the source of the doctored video or audio, technology will have little impact on the issue,” said Olson.

“Providing this context will help the consumer better understand the veracity of the message. Was it sent to me from an unknown third party, or did I find it on a brand website during product research? This attribution information is what’s needed to counter deepfakes,” he said.

However, with a lot of manipulated content, it is often about misdirection — and in this case, too much focus on the video itself could be problematic.

“It’s not just the visual stream that’s vulnerable to deepfakes, but also the voice stream. Both can be altered or manipulated as part of a deepfake,” said ID R&D’s Khitrov.

There is technology to help detect a deepfake’s manipulated audio, he noted.

“Liveness detection capabilities can identify artifacts that aren’t audible to the human ear but are present in synthesized, recorded or computer-altered voice,” explained Khitrov. “We can detect over 99 percent of audio deepfakes, but only where that technology is deployed.”

The Deep View

Those with a pessimistic take believe the technology to create convincing deepfakes simply will outpace the technology to stop it.

“Our analogy is that there are viruses and there is antivirus technology, and staying ahead of the bad guys requires constant iteration, and the same holds true for deepfake detection,” said Khitrov.

However, “the AI-based technologies that the bad guys are using are very similar to the technologies the good guys are using. So the breakthroughs that are available to the bad guys are also being used by the good guys to stop them,” he added.

A bigger threat is that “deepfake software is already freely available on the Web, although it’s not that great yet,” said Comparitech’s Bischoff. “We will have to learn to be vigilant and skeptical.”

*ECT News Network editor’s note – Jan. 22, 2020: Our original published version of this story incorrectly stated that the Deepfake Detection Challenge was a project launched by The Media Trust. In fact, it is a joint effort of Facebook, Microsoft and AWS. We regret the error.

Peter Suciu has been an ECT News Network reporter since 2012. His areas of focus include cybersecurity, mobile phones, displays, streaming media, pay TV and autonomous vehicles. He has written and edited for numerous publications and websites, including Newsweek and Wired. Email Peter.

Cosmos: Possible Worlds – A Lavish, Hopeful Journey



What if your favorite blockbuster science fiction movie were true?

When we watch a work of fiction, we enter an understanding with its makers. We agree to suspend our disbelief, and they agree to entertain us. If it’s a particularly good work, perhaps it inspires us by communicating some kernel of wisdom about human nature. Its story may touch our emotions deeply, but we know it’s the product of imagination and artistry. It’s not bound by facts.

Cosmos: Possible Worlds delivers everything a great science fiction movie delivers, except the fiction. It uses the same tools and tactics — powerful visuals, soaring music and, most importantly, wonderful stories. Mined by hunters and gatherers who excel at finding treasures with deep resonance, those stories are told by a gifted communicator. And they’re true.

Although Cosmos adheres rigorously to what is objectively true, it is in no way limited by the facts — it’s the facts that make it so beautiful.

Voyage of the Imagination

Cosmos: Possible Worlds is the third season in the Cosmos series, which creator, executive producer, director and writer Ann Druyan began 40 years ago with her late husband, Carl Sagan.

Joining Druyan this time around are executive producers Seth MacFarlane, Brannon Braga and Jason Clark. Astrophysicist Neil deGrasse Tyson is the host. The 13-episode season kicks off on National Geographic on March 9.

The series takes viewers on a voyage through time and space, starting with the Big Bang and crossing the veil of the present to imagine far-distant worlds in a far-distant future. It’s fueled by VFX, animation, dramatic re-enactments, and a wealth of scientific information that seems to float effortlessly from the screen to the viewer’s mind and heart.


Host Neil deGrasse Tyson is silhouetted against the birth of the cosmos — the Big Bang — at the inception of the Cosmic Calendar and its vast 13.8 billion years of cosmic evolution.

Its aim is not only to relate scientific truths, but also to encourage us to imagine future possibilities. Cosmos pulls back the curtain to reveal some of the wonders that may lie ahead.

When the show veers into speculation, it lets us know. That speculation is not freewheeling, however. It’s underpinned very carefully with scientific knowledge, and vetted by experts in their fields.

Aspirational at Heart

This season of Cosmos offers a vision of a hopeful future — “not the dystopian ruined future that we see so many times,” Druyan said last week during a panel discussion at the Television Critics Association 2020 Winter Press Tour in Pasadena.

It’s “a science-based dream,” she added.

It does that in part by encouraging self-reflection, according to Druyan.


Executive Producer Ann Druyan walks through a scene with Elizardo Torrez, who plays a young Carl Sagan, on set in his family’s apartment.

Though Cosmos: Possible Worlds is passionately truthful, it is also storytelling at its core, and it touches viewers’ emotions in much the same way as other great works of art often do.

“When you watch it you’re not thinking ‘documentary,’ you’re thinking something else — I don’t know if there’s a word,” remarked Tyson, also on the panel.

It shows us a future we hope for, he added.

Call to Action

One of the messages of Cosmos: Possible Worlds seems to be that our distant descendants won’t have a future unless we look unflinchingly at the state of our planet today and recognize humanity’s responsibility to preserve it — and that requires having the facts straight.

Cosmos is not at all ponderous or preachy, however. One of the hallmarks of the show, going back to the first season with Carl Sagan, is to present science in a way “that doesn’t make you feel bad for not having known it in advance,” Tyson said.

There’s a sense of comfort in “learning for yourself what is true, and then taking advantage of that newfound power of knowledge and insight to become a better shepherd of your civilization. I think that’s achieved in this season of Cosmos as never before,” he added.

How can Cosmos prevail against science deniers? For example, 30 years ago few people would have been inclined to embrace the notion that Earth might be flat, an audience member said, but now there are international societies promoting the belief.

What Cosmos aspires to do is “to empower every single one of us. We are telling a great story, and it’s the closest thing to the truth that we very-flawed humans ever get a hold of. I think there’s no better way to combat the failure of the imagination that makes someone believe in a flat Earth than to inspire them, and engage them, and bring them in,” Druyan responded.

“It matters what’s true,” she said. “We have to communicate that value to everyone.”

Mick Brady is managing editor of ECT News Network.