Tuesday, December 31, 2019

Looking ahead to 2020, Rackspace CTO Joel Friedman offers his predictions

As the New Year 2020 begins, hybrid and multi-cloud will continue to gain traction, as will software as a service (SaaS). Security remains a tough nut to crack, and edge computing will gain ground, but more modestly than the current hype may suggest, predicts Joel Friedman, Rackspace CTO.


PREDICTION 1: HYBRID/MULTI-CLOUD REMAINS ASCENDANT
Smart organizations have locked onto the benefits of running apps and data in the places that make the most sense, whether that’s public or private cloud, colocation or, most commonly, a bespoke combination of these that can evolve as business needs and resources dictate. However, the complexity of managing multiple clouds across the dimensions of cost, security, governance, identity and DevOps patterns will continue to grow in 2020.


This complexity is related to the forced reliance upon a multitude of platforms, the rate of technological change, disruptions to traditional service delivery models and a real shortage of skill sets required to bring it all together. Due to these factors, it is likely that multi-cloud solutions will prevail, while the search for qualified external help will intensify.

Finding external help may prove difficult, as not only will managed service providers need to maintain a multi- or hybrid cloud instance, but internal and external vendors will need to work together to manage a multitude of issues around cost governance, security and tagging, to name a few. Fortunately, 451 Research has found these growing pains can be alleviated with clear communication and flexibility on each partner’s part.

PREDICTION 2: SAAS = PROBLEM SOLVED
On top of infrastructure, one of the cloud strategies I hear on repeat when working with customers is that “cloud first” starts with SaaS. They don’t want to reinvent the (inferior) wheel by developing applications for ‘solved’ problems. Mature SaaS offerings are a culmination of many iterations of customer mandated features, user experience analysis, and just as important, industry best practices around process workflows.

In many cases, organizations are choosing to fit their internal processes to best match their SaaS vendor’s implementation, as opposed to always opting to customize the platform to suit legacy workflows. Furthermore, SaaS has a compounding effect with data; once data is in the system, other ecosystem players can offer turn-key integration and add-on services, further enhancing the value proposition of SaaS.

While this isn’t necessarily a new trend, Friedman predicts it will become the norm in 2020 and expects to see more and more niche, vertically focused applications adopting a SaaS model.


PREDICTION 3: SECURITY REMAINS A STRUGGLE
When it comes to security, many organizations remain burdened with legacy systems riddled with technical debt and corresponding security vulnerabilities. They understand the need to harness cloud-compatible security frameworks, operating models and tools to operate securely in cloud-native environments, but it can be slow and rough going.

Most industries simply have not reached the place where mature cloud security practices are being implemented, and this creates fertile ground for breaches. Too many are still attempting to apply traditional security controls and methodologies to cloud-native environments and deployments. 

This generally does not work well; not only is the security implementation likely to be ill fit, it also creates drag on the organization’s desired benefits of agility. This sometimes leads to fragmentation, where business units go around central security and implement on their own shadow security. The risks of this should be obvious.

In 2020, Friedman thinks, the cloud will continue to present growing pains to even the most digital-native and forward-thinking organizations. These organizations understand how the cloud can aid their business in moving fast, but they don’t yet have the maturity to implement real-time or just-in-time posture management and ‘least privilege’ protocols in the ephemeral, complex and constantly changing world of multi-cloud.

While security teams straddle the old world of insecure legacy applications and platforms (those which cannot or should not be modernized) and the relative newness of cloud operating models, I unfortunately expect to see the breach headlines continue in 2020, along with all the free credit monitoring we can handle.

Even when security teams agree in concept that it’s possible to be more secure in the cloud, many do not have the cloud-compatible security frameworks, operating models and tools to aid the business in operating securely in cloud-native environments.


PREDICTION 4: CLOUD CONTROL PLANE WILL BECOME THE NORM
There has been a trend over the past few years in which the management plane has been shifting from the datacenter to the cloud. For early cloud adopters, the datacenter was still the central control point. Organizations burst into the cloud or ran back-end projects with no internet accessibility. As cloud grew in popularity, more and more greenfield projects became cloud-native.

Granted, many still integrate with on-premises or hosted private clouds for identity, business intelligence or other data enrichment requirements, but the hyperscalers have now moved past denying that hybrid cloud is real (in attempts to capture all workloads), and pivoted their strategy to capture those datacenter workloads where they stand and bring them into their ecosystem. 

This includes Snowball Edge, Amazon RDS on VMware, Azure Stack/Azure Arc and Google Anthos. Expect to see more such services in 2020. And why shouldn’t we? Cloud providers have proven their effectiveness at operating securely at scale and at API-enabling everything.


PREDICTION 5: EDGE WILL GAIN MODERATE GROUND
Edge computing is a new frontier — no player currently dominates this space, as it is a naturally fragmented market. When choosing locations over platforms to address local markets, data sovereignty, and other use case-specific needs, I believe organizations may want to align in a technology-neutral and consistent manner. And based on the trends of the past year, containers and serverless seem like natural beneficiaries at the edge.

While Kubernetes dominates the container orchestration arena, serverless is another story in which, as yet, there are no victors. AWS, Azure and Google Cloud Platform all have their own flavors of Function-as-a-Service (FaaS), embedded within their respective cloud ecosystems. Will OpenFaaS and Knative gain some ground on the strength of open standards, addressing the oft-cited goals of lock-in avoidance and workload mobility? We shall see.
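Whichever flavor wins out, the FaaS model the providers share is simple: the platform invokes a short-lived function once per event, and the developer manages no server process. Here is a minimal sketch in the AWS Lambda Python handler style; the event payload and its field names are hypothetical, chosen only for illustration:

```python
import json

# Minimal FaaS handler in the AWS Lambda Python convention: the platform
# calls handler(event, context) for each incoming event; we manage no server.
def handler(event, context):
    # 'event' carries the trigger payload (an HTTP request body, a queue
    # message, a sensor reading at the edge, etc.); fields here are invented.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

The same function body could be deployed largely unchanged to other FaaS runtimes, which is precisely the portability argument behind the open-standards projects.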

Either way, Friedman predicts that 2020 will see lots of innovation and exploration in the edge space, but he suspects this is the year of gaining momentum and the floodgates won’t open until 2021.

Kaspersky Web Traffic Security now available in two deployment options to meet customer demands

Kaspersky released the next version of Kaspersky Web Traffic Security, which now offers enhanced protection capabilities through integration with Kaspersky Anti Targeted Attack to improve early detection of sophisticated web threats. The product is now available in two deployment options, as a standalone application and a software appliance, to meet broader customer needs.


According to Kaspersky researchers, 717 million web attacks were detected in the second quarter of this year. These attacks carry various kinds of malware, ranging from generic adware to ransomware and advanced threats, and reach corporate networks through phishing, social engineering or browsing of untrustworthy web resources.

Web gateway protection allows companies to block massive amounts of web threats before they reach endpoints. This decreases the number of alerts on endpoints that interrupt users and administrators, and ensures protection of devices that don’t have an endpoint security product installed or updated.


To make the deployment and use of the product effective for companies with different needs, Kaspersky now offers two options: as a software appliance or a standalone application.

The software appliance is a ready-to-use solution for companies that need to quickly deploy and start using a secure web gateway with a pre-configured proxy server. The appliance interface allows users to manage the incorporated proxy server, avoiding configuration hassle.

The Kaspersky Web Traffic Security standalone application offers resource savings and more agile configuration for companies that need a more customized solution and careful integration of different cybersecurity products. The application does not necessarily demand a separate server, provided the system requirements for protecting a particular bandwidth are met. It can be installed alongside other applications but configured separately from other gateway components.


The protection capabilities of Kaspersky Web Traffic Security are now empowered by two-way API-based integration with Kaspersky Anti Targeted Attack, which allows customers of both solutions to achieve earlier attack detection and automated responses to advanced web threats. 

Suspicious files are automatically sent to Kaspersky Anti Targeted Attack for analysis. The system reveals the nature of the threat and the malicious activity it generates, including dropped files, transmitted commands, payloads and stolen data, and blocks them at an early stage of the attack.


“Different deployment scenarios allow companies to choose the best way to secure corporate web traffic, depending on available resources or IT network architecture,” said Sergey Martsynkyan, head of B2B product marketing at Kaspersky. “It also brings new usage scenarios to our partners. For example, the new format of the all-in-one appliance can be used by managed service providers to add web traffic security to their portfolio. Thanks to the application’s ease of deployment, scalability and multi-tenancy, they can provide web traffic security as a service for additional protection to an ever-growing number of customers, without the hassle.”

Kaspersky Web Traffic Security is available in both deployment options as part of Kaspersky Security for Internet Gateway and Kaspersky Total Security for Business.

VMware completes acquisition of Pivotal, connects infrastructure and application owners to boost software delivery, business outcomes

VMware announced Monday that it has completed the acquisition of Pivotal Software. With the completion of the acquisition, Pivotal’s Class A common stock was removed from listing on the New York Stock Exchange with trading suspended prior to the open of the market today, and Pivotal will now operate as a wholly owned subsidiary of VMware. The transaction represented an enterprise value for Pivotal of approximately $2.7 billion.


Under the terms of the transaction, Pivotal’s Class A common stockholders are entitled to receive $15.00 per share cash for each share held (without interest and less applicable tax withholdings), and Pivotal’s Class B common stockholder, Dell Technologies, received approximately 7.2 million shares of VMware Class B common stock, at an exchange ratio of 0.0550 shares of VMware Class B common stock for each share of Pivotal Class B common stock.

Pivotal’s offerings will be core to the VMware Tanzu portfolio of products and services designed to help customers transform the way they build, run and manage their most important applications, with Kubernetes as the common infrastructure substrate. 


The combination of Pivotal’s developer-centric offerings with VMware’s upstream Kubernetes run-time infrastructure and management tools will deliver a comprehensive enterprise solution that enables dramatic improvements in developer productivity in the creation of modern applications. VMware is able to offer product building blocks and integrated solutions that are tested and proven with technical expertise that customers need to accelerate software delivery across data center, cloud and edge environments.

“It's my pleasure to announce Ray O'Farrell as the leader of VMware’s new Modern Applications Platform business unit—uniting the Pivotal and VMware Cloud Native Applications teams,” said Pat Gelsinger, CEO, VMware. “And as Pivotal is now part of VMware, I want to thank the Pivotal leadership team for building a great company. Together, we’re poised to be the leading enabler of Kubernetes with a deep understanding of both operators and developers.”

“Digital transformation and the applications that drive it should not be restricted only to cloud and software giants,” said Ray O’Farrell, executive vice president and general manager, Modern Applications Platform Business Unit, VMware. “We believe that modern application development solutions and practices need to be easily accessible to everyday enterprises across the globe. With Pivotal’s developer capabilities as the foundation, we’ll focus on delivering consumable, enterprise-ready cloud native offerings to customers to help them achieve better business outcomes.”  


“Pivotal has fundamentally changed how the world’s biggest brands build and manage software with a focus on developer productivity through platform abstractions and development techniques as well as connecting the business with the developer,” said Edward Hieatt, senior vice president, customer success, Pivotal. “The combination of Pivotal and VMware offers the most comprehensive application platform in the industry and is a win for our customers, a win for Pivotal, and a win for VMware. We’re excited to team up with VMware to help more enterprises become like modern software companies by adopting DevOps and Lean techniques developed by internet giants and the startup community.”


Numerous mutual customers including Raytheon have reacted positively to the news of the acquisition. “By working with both Pivotal and VMware, we’ve been able to completely transform how we write software for our military and government customers,” said Todd Probert, vice president for C2, Space and Intelligence at Raytheon. “Combining these companies under a single umbrella is going to make it possible for my team to get code to our customers even faster and easier.”

NetApp predicts data management and IT trends will lead in the New Year

2019 was a year of rapid innovation—and disruption—for both the IT industry and the broader business community. With the widespread adoption of hybrid multicloud as the de facto architecture for enterprise customers, organizations everywhere are under tremendous pressure to modernize their infrastructure and to deliver tangible business value around data-intensive applications and workloads.

As a result, organizations are shifting from on-premises environments to using public cloud services, building private clouds, and moving from disk to flash in data centers—sometimes concurrently. These transformations open the door to enormous potential, but they also introduce the unintended consequence of increased IT complexity.


NetApp predicts that a demand for simplicity and customizability will be the number one factor that drives IT purchasing decisions in 2020. Vendors will need to offer modern, flexible technologies with the choice of how to use and to consume those technologies so that customers can keep pace with their evolving business models. As IT departments strive to deemphasize maintenance and hardware, to reduce overhead, and to adopt pay-as-you-go models, simplicity and choice will be crucial.

Achieving this simplicity will serve as the foundation for companies as they navigate the exciting technological trends that we identify in the following sections.

1. As the advent of 5G makes AI-driven Internet of Things (IoT) a reality, edge computing environments are primed to become even more disruptive than cloud was.


In preparation for the widespread emergence of 5G, lower-cost sensors and maturing AI applications will be used to build compute-intensive edge environments. This effort will lay the groundwork for high-bandwidth, low-latency AI-driven IoT environments with the potential for huge innovation—and disruption.

The advent of 5G is what AI-driven IoT has been waiting for. It will take a few more years for 5G data technology to spread across the entire United States. However, 2020 will see many players in the technology industry and business community invest in building edge computing environments to support the reality of AI-driven IoT. These environments will make possible new use cases that rely on intelligent, instantaneous, and autonomous decision-making, with low-latency, high-bandwidth capabilities. This evolution will bring us to a world where the internet will work on our behalf—without even having to ask.


This AI-driven IoT innovation, however, will depend on a massive prioritization of edge computing, further disrupting IT infrastructures and data management priorities. As edge devices move beyond home devices (such as connected thermostats and speakers) and become more far-reaching (such as connected solar farms), more data centers will be placed at the edge. Also, platforms such as artificial intelligence for IT operations (AIOps) will be necessary to help monitor complex environments across the edge, the core, and the cloud.

2. The impact of blockchain will be undeniable as indelible ledgers rapidly enable game-changing use cases outside of cryptocurrency.


The world is quickly moving beyond Bitcoin to adopt enterprise-distributed indelible ledgers, setting the stage for a transformation that’s exponentially bigger than the impact that blockchain-based cryptocurrency has had on finance.

While the crypto frenzy continues to steal the limelight when it comes to blockchain, most players in the industry understand the bigger picture of the technology and its potential. Going into 2020, we will see a tipping point for larger implementations as enterprises go a step further to adopt indelible ledgers based on Hyperledger, which represents the maturation of blockchain for wider use cases. Indeed, we will start to see blockchain go “mainstream” as it enables industries such as healthcare to create universal patient records, to improve chain-of-custody pharmaceutical processes, and more.

With such use cases validating blockchain and indelible ledgers, additional widespread adoption of the technology will drive transformation across society on a larger scale. This widespread adoption will build on the disruption that cryptocurrency has brought to finance to touch nearly every industry. As a result, new data management and compute capabilities will encourage companies to invest in indelible ledgers to build differentiated applications and to collaborate on critical, sensitive datasets.

3. Hardware-based composable architecture will have less short-term potential against commodity hardware and software-based infrastructure virtualization.


Continued improvements in commodity hardware performance, software-based virtualization, and microservice software architectures will eliminate much of the performance advantage of proprietary hardware-based composable architectures, relegating them to niche data center roles soon. 

Hardware-based composable architecture is being hyped as the next evolution of hyperconverged infrastructure (HCI). This architecture enables CPUs, networking cards, workload accelerators, and storage resources to be distributed across a rack-scale architecture and to be connected with low-latency PCIe-based switching. 

Although composable architecture does have potential, standardization has been slow, and adoption has been even slower. Meanwhile, software-based virtualization of storage, combined with software-based (but hardware-accelerated) compute and networking virtualization solutions, offers much of the flexibility of hardware-based composable architectures today with lower cost and consistently increasing performance.

Next year, attempts to build a true hardware-based rack-scale computing model will no doubt continue, and the space will continue to evolve quickly. However, most organizations that must transform in 2020 will be best served by a combination of modern HCI architectures (including disaggregated HCI) and software-based virtualization and containerization.

Cloudflare reveals that this holiday season’s biggest online shopping day was... Black Friday

Cloudflare revealed that the biggest day of this year’s holiday season for online shopping was Black Friday, the day after the US Thanksgiving holiday, which has been embraced globally as the day retail stores announce their sales.

But it was believed that the following Monday, dubbed “Cyber Monday,” may be even bigger. Or, with the explosion of reliable two-day and even one-day shipping, maybe another day closer to Christmas has taken the crown. Cloudflare aimed to answer this question for the 2019 holiday shopping season.

Black Friday was the biggest online shopping day but the second biggest wasn't Cyber Monday... it was Thanksgiving Day itself (the day before Black Friday!), with Cyber Monday taking the fourth position. 


Cloudflare’s chart of daily checkouts shows weekends in yellow, with Black Friday and Cyber Monday in green. Checkouts ramped up during Thanksgiving week and then continued through the weekend into Cyber Monday.

Black Friday had twice the number of checkouts as the preceding Friday and the entire Thanksgiving week dominates. Post-Cyber Monday, no day reached 50 percent of the number of checkouts witnessed on Black Friday, and Cyber Monday was just 60 percent of Black Friday.


So, Black Friday is the peak day but Thanksgiving Day is the runner up. Perhaps it deserves its own moniker: Thrifty Thursday anyone?

Checkouts occur more frequently from Monday to Friday and then drop off over the weekend. After Cyber Monday, only one other day showed an interesting peak: Tuesday, Dec. 17 appears to have been the pre-Christmas peak for online checkouts. Perhaps fast online shipping made consumers feel they could keep shopping online as long as their purchases arrived by the weekend before Christmas.

Saturday, December 28, 2019

Trend Micro envisages a new era as cybercriminals use machine learning to create advanced threats

Cybersecurity companies use machine learning technology to enhance threat detection capabilities that help fortify organizations’ defense against malware, exploit kits, phishing emails, and even previously unknown threats, Trend Micro revealed. 

The Capgemini Research Institute conducted a study on the usage of machine learning for security and found that of 850 senior executive respondents based in 10 countries, around 20 percent started using the technology before 2019, and about 60 percent will be using it by year’s end.


The use of machine learning in cybersecurity — not to mention in many other fields across various industries — has proven to be beneficial. This technology, however, is also at risk of being used by threat actors. While widespread machine learning weaponization may still be far off, research in this area, particularly the use of deepfake technology to extort and misinform, has recently become a topic of interest for the IT community and the general public.

It is hard to get a clear picture of what is possible, and what is already a reality, with regard to machine learning-powered cyberthreats: research on machine learning-powered malware remains surprisingly scarce, considering that some experts have long regarded it as a type of threat that can possess advanced capabilities. In fact, only one PoC of such a threat has been publicized, unveiled at Black Hat USA 2018.


IBM presented a variant named DeepLocker that can deploy untraceable malicious applications within a benign data payload. The malware variant is supported by deep neural networks (DNN), or deep learning, a form of machine learning. The use of a DNN disguises the malware’s trigger conditions, the pieces of information that security solutions need in order to detect the malicious payload.

DeepLocker is designed to hide until it detects a specific victim. In the demonstration, DeepLocker was seen stealthily waiting for a specific action that will trigger its ransomware payload. The action that triggered the payload was the body movement of a targeted victim when he/she directly looked at a laptop webcam, which is operated by a webcam application embedded with malicious code. The application of machine learning in this attack can be considered limited, but it showed how malware variants can be highly evasive and targeted when infused with machine learning.


Experts are increasingly warning the public about deepfake videos, which are fake or altered clips that contain hyperrealistic images. Produced from generative adversarial networks (GANs) that generate new images from existing datasets of images, deepfake videos can challenge people’s perception of realities, confusing our ability to discern what is true from false.

Deepfake technology is mostly used in videos involving pornography, political propaganda, and satire. As for the impact of these videos, a Medium article published in May claimed that there were about 10,000 deepfake videos online, with House Speaker Nancy Pelosi and Hollywood star Scarlett Johansson being two of their popular subjects/victims.

Regarding the use of this technology in profit-driven cybercrime, it can be surmised that deepfake videos may be used to create a new variation of business email compromise (BEC) or CEO fraud. In this variation of the scheme, a deepfake video can be used as one of its social engineering components to further deceive victims.


But the first reported use of deepfake technology in CEO fraud came in the form of audio. In September, a scam involving deepfake audio was used to trick a U.K.-based executive into wiring US$243,000 to a fraudulently set-up account. The victim company’s insurance firm stated that the voice heard on the phone call was able to imitate not only the voice of the executive being spoofed, but also his tonality, punctuation, and accent.

Brute force and social engineering methods are old but popular techniques that cybercriminals use to steal passwords and hack user accounts. New ways to do this could be inadvertently aided by user information shared on social media – some still embed publicly shared information into their account passwords. Additionally, machine learning research on password cracking is an area of concern that users and enterprises should closely pay attention to.

Back in 2017, one of the early proofs of machine learning’s susceptibility to abuse was publicized in the form of PassGAN — a program that can generate high-quality password guesses. Using a GAN composed of two competing machine learning systems, experts from the Stevens Institute of Technology in New Jersey, USA, were able to use the program to guess more user account passwords than the popular password cracking tools HashCat and John the Ripper.

To compare PassGAN with HashCat and John the Ripper, the developers fed their machine learning system more than 32 million passwords collected from the 2010 RockYou data breach, and let it generate millions of new passwords. Subsequently, it attempted to use these passwords to crack a hashed list of passwords taken from the 2016 LinkedIn data breach.

The results came back with PassGAN on its own generating 12 percent of the passwords in the LinkedIn set, while the other tools generated between 6 percent and 23 percent. But when PassGAN and HashCat were combined, 27 percent of the passwords from the LinkedIn set were cracked. If cybercriminals are able to devise a similar or enhanced version of this methodology, it could be a potentially reliable way to hijack user accounts.
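The guess-and-check half of this pipeline is easy to illustrate. Whatever produces the candidates (PassGAN, a wordlist, or mangling rules), each guess is hashed with the same unsalted algorithm as the leak and checked for membership. A minimal sketch; the passwords and "leak" below are made-up toy data, not drawn from any real breach:

```python
import hashlib

def crack(candidates, leaked_hashes):
    """Return the candidate passwords whose SHA-1 digest appears in the leak."""
    leaked = set(leaked_hashes)          # set lookup keeps each check O(1)
    return [p for p in candidates
            if hashlib.sha1(p.encode()).hexdigest() in leaked]

# Toy 'leak' of two unsalted SHA-1 hashes, plus some generated guesses.
leak = [hashlib.sha1(p.encode()).hexdigest() for p in ("letmein", "sunshine1")]
guesses = ["password", "letmein", "qwerty", "sunshine1"]
print(crack(guesses, leak))  # -> ['letmein', 'sunshine1']
```

The defense implication is the same one security teams already know: salted, slow hashes (bcrypt, scrypt, Argon2) make this membership test vastly more expensive than a single unsalted SHA-1 lookup.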


Adversarial machine learning is a technique that threat actors can use to cause a machine learning model to malfunction. They can do so by crafting adversarial samples, which are modified input fed to the machine learning system to mess up its ability to predict accurately. In essence, this technique — also called an adversarial attack — turns the machine learning system against itself and the organization running it.

This method has proven capable of causing machine learning models for security to perform poorly, for example by making them produce higher false positive rates. Attackers can do this by injecting malware samples that are similar to benign files, poisoning the machine learning training sets.

Machine learning models used for security can also be tricked using infected benign Portable Executable (PE) files or a benign source code compiled with malicious code. This technique can make a malware sample appear benign to models, preventing security solutions from accurately detecting it as malicious since its structure is still mostly comprised of the original benign file.
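The evasion idea can be sketched in a few lines. For a purely linear "malware score" the gradient of the score with respect to the input features is just the weight vector, so an FGSM-style step against the sign of the weights drives a flagged sample under the decision threshold. The model, weights and feature values below are invented purely for illustration:

```python
# Toy evasion attack on a linear malware score s(x) = w.x + b.
def score(w, b, x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def evade(w, x, eps):
    """FGSM-style step: for a linear model, grad_x s = w, so moving each
    feature against sign(w) by eps lowers the score the most per unit change."""
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w, b = [1.0, 2.0, -1.0], 0.0        # hypothetical model weights
x = [0.5, 0.2, 0.1]                 # sample the model flags as malicious
x_adv = evade(w, x, eps=0.3)
print(score(w, b, x), score(w, b, x_adv))  # 0.8 -> -0.4, under the 0 threshold
```

Real models are nonlinear and the attacker rarely has the weights, but black-box variants of the same idea work by probing the model and estimating gradients, which is why limiting an attacker's ability to query a classifier matters.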

When it comes to dealing with advanced password cracking tools such as the machine learning-powered PassGAN, users and organizations can move towards two-factor authentication schemes to reduce their reliance on passwords. One approach to this is using a one-time password (OTP) — an automatically generated string of characters that authenticates the user for a single login session or transaction.
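The OTP mechanism mentioned above is standardized: RFC 4226 defines HOTP, an HMAC-SHA1 code computed over a moving counter (TOTP, RFC 6238, simply derives the counter from the clock). A minimal implementation, checked against the RFC's published test vectors:

```python
import hmac, hashlib, struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)                       # counter as 8-byte big-endian
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test vector: this secret at counter 0 yields 755224.
print(hotp(b"12345678901234567890", 0))  # -> 755224
```

Because each code is valid for only one counter value (or one time window), a password captured by a cracking tool is useless on its own, which is the point of layering OTP on top of passwords.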


Meanwhile, technologies are continuously being developed to defend against deepfakes. To detect deepfake videos, experts from projects initiated by the Pentagon and SRI International are feeding samples of real and deepfake videos to computers. This way, computers can be trained to detect fakes. 

To detect deepfake audio, experts are likewise training computers to recognize inconsistencies. And as for the platforms where deepfakes can creep in, Facebook, Google, and Amazon, among other organizations, are joining forces to detect them via the DeepFake Detection Challenge (DFDC) — a project that invites people around the world to build technologies that can help detect deepfakes and other forms of manipulated media.

Adversarial attacks, on the other hand, can be prevented by making machine learning systems more robust. This can be done in two steps: First, by spotting potential security holes early on in its design phase and making every parameter accurate, and second, by retraining models via generating adversarial samples and using them to enhance the efficiency of the machine learning system. 

Reducing the attack surface of the machine learning system can also ward off adversarial attacks. Since cybercriminals modify samples in order to probe a machine learning system, cloud-based solutions, such as products with Trend Micro XGen security, can be used to detect and block malicious probing.

Governments and private organizations, particularly cybersecurity companies, should anticipate a new era where cybercriminals use advanced technologies such as machine learning to power their attacks. As they have done in the past, cybercriminals will continue to develop more advanced and new forms of threats to be one step ahead. In this light, technologies for combating these threats should likewise continue to evolve. 

However, while it would be a good choice to implement a tailor-fit technology to detect such threats, a multilayered security defense (one that combines a variety of technologies) and the consistent application of cybersecurity best practices are still the most effective ways to defend against a wide range of threats.

Masimo secures FDA clearance for neonatal RD SET Pulse Oximetry sensors with improved accuracy specifications

Masimo announced that RD SET sensors with Masimo Measure-through Motion and Low Perfusion SET pulse oximetry have received FDA clearance ...