Tuesday, December 31, 2019

Cloudflare reveals that this holiday season's biggest online shopping day was... Black Friday

Cloudflare revealed that the biggest online shopping day of this year's holiday season was Black Friday, the day after the US Thanksgiving holiday, which has been embraced globally as the day retail stores announce their sales.

But it was long believed that the following Monday, dubbed "Cyber Monday," might be even bigger. Or, with the explosion of reliable two-day and even one-day shipping, perhaps another day closer to Christmas had taken the crown. Cloudflare aimed to answer this question for the 2019 holiday shopping season.

Black Friday was the biggest online shopping day, but the second biggest wasn't Cyber Monday: it was Thanksgiving Day itself (the day before Black Friday!), with Cyber Monday taking fourth position.


The chart shows weekends in yellow, with Black Friday and Cyber Monday in green. Checkouts ramped up during Thanksgiving week and then continued through the weekend into Cyber Monday.

Black Friday had twice the number of checkouts as the preceding Friday, and the entire Thanksgiving week dominated. After Cyber Monday, no day reached 50 percent of the checkouts seen on Black Friday, and Cyber Monday itself saw just 60 percent of Black Friday's checkouts.


So, Black Friday is the peak day but Thanksgiving Day is the runner up. Perhaps it deserves its own moniker: Thrifty Thursday anyone?

Checkouts occur more frequently from Monday to Friday and then drop off over the weekend. After Cyber Monday, only one other day showed a notable peak: Tuesday, Dec. 17 appears to have been the pre-Christmas peak for online checkouts. Perhaps fast shipping made consumers feel they could keep shopping online as long as their purchases arrived by the weekend before Christmas.

Saturday, December 28, 2019

Trend Micro envisages a new era as cybercriminals use machine learning to create advanced threats

Cybersecurity companies use machine learning technology to enhance threat detection capabilities that help fortify organizations’ defense against malware, exploit kits, phishing emails, and even previously unknown threats, Trend Micro revealed. 

The Capgemini Research Institute conducted a study on the usage of machine learning for security and found that of 850 senior executive respondents based in 10 countries, around 20 percent started using the technology before 2019, and about 60 percent will be using it by year’s end.


The use of machine learning in cybersecurity — not to mention in many other fields across various industries — has proven to be beneficial. This technology, however, is also at risk of being used by threat actors. While widespread machine learning weaponization may still be far off, research concerning this area, particularly the use of deepfake technology to extort and misinform, has recently become a topic of interest for the IT community and the general public.

As for what is possible and what is already a reality with regard to machine learning-powered cyberthreats, research on machine learning-powered malware is still surprisingly scarce, considering that some experts have long regarded it as a type of threat with potentially advanced capabilities. In fact, only one PoC of such a threat has been publicized, unveiled at Black Hat USA 2018.


IBM presented a variant named DeepLocker that can conceal an untraceable malicious application within a benign carrier payload. The malware is powered by a deep neural network (DNN), a deep learning model and thus a form of machine learning. The DNN disguises the malware's trigger conditions, the pieces of information that security solutions need in order to detect the malicious payload.

DeepLocker is designed to hide until it detects a specific victim. In the demonstration, DeepLocker stealthily waited for a specific action to trigger its ransomware payload: the body movement of a targeted victim looking directly at a laptop webcam, which was operated by a webcam application embedded with the malicious code. The application of machine learning in this attack can be considered limited, but it showed how malware can become highly evasive and targeted when infused with machine learning.
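The concealment trick DeepLocker relies on can be sketched conceptually: derive the payload's decryption key from an observed target attribute, so the trigger condition never appears in the code itself. In this minimal, benign Python sketch (an illustration of the idea, not DeepLocker's actual implementation), a SHA-256 hash stands in for the deep neural network that maps target attributes to a key:

```python
import hashlib

def derive_key(observed_attribute: str) -> bytes:
    # In DeepLocker, a deep neural network maps observed target
    # attributes (e.g., a recognized face) to the key; a hash is a
    # stand-in for that mapping here.
    return hashlib.sha256(observed_attribute.encode()).digest()

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher, enough to show the locking idea.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Attacker side: lock the payload to the intended target. Only the
# ciphertext ships; the target attribute itself never appears in it.
payload = b"payload"
locked = xor_bytes(payload, derive_key("intended-target"))

# Anywhere else, the payload stays unreadable noise...
assert xor_bytes(locked, derive_key("someone-else")) != payload
# ...and it unlocks only when the observed attribute matches the target.
assert xor_bytes(locked, derive_key("intended-target")) == payload
```

Because only the ciphertext and the key-derivation logic are present, inspecting the code cannot reveal the trigger condition, which is what makes the conditions effectively untraceable.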


Experts are increasingly warning the public about deepfake videos, which are fake or altered clips that contain hyperrealistic images. Produced by generative adversarial networks (GANs) that generate new images from existing datasets of images, deepfake videos can challenge people's perception of reality, confusing our ability to discern what is true from what is false.

Deepfake technology has mostly been used in videos involving pornography, political propaganda, and satire. As for the reach of these videos, a Medium article published in May claimed that there were about 10,000 deepfake videos online, with House Speaker Nancy Pelosi and Hollywood star Scarlett Johansson among their most popular subjects.

Regarding the use of this technology in profit-driven cybercrime, it can be surmised that deepfake videos may be used to create a new variation of business email compromise (BEC) or CEO fraud. In this variation of the scheme, a deepfake video can be used as one of its social engineering components to further deceive victims.


But the first reported use of deepfake technology in CEO fraud came in the form of audio. In September, a scam involving deepfake audio tricked a U.K.-based executive into wiring US$243,000 to a fraudulently set-up account. The victim company's insurance firm stated that the voice on the phone call imitated not only the voice of the executive being spoofed but also his tonality, punctuation, and accent.

Brute force and social engineering methods are old but popular techniques that cybercriminals use to steal passwords and hack user accounts. New ways of doing this could be inadvertently aided by user information shared on social media, since some users still embed publicly shared information in their account passwords. Additionally, machine learning research on password cracking is an area of concern that users and enterprises should pay close attention to.

Back in 2017, one of the early proofs of machine learning's susceptibility to abuse was publicized in the form of PassGAN, a program that can generate high-quality password guesses. Using a GAN, which pits two machine learning systems against each other, experts from the Stevens Institute of Technology in New Jersey, USA, were able to guess more user account passwords than the popular password-cracking tools HashCat and John the Ripper.

To compare PassGAN with HashCat and John the Ripper, the developers fed their machine learning system more than 32 million passwords collected from the 2010 RockYou data breach, and let it generate millions of new passwords. Subsequently, it attempted to use these passwords to crack a hashed list of passwords taken from the 2016 LinkedIn data breach.

The results showed PassGAN alone generating 12 percent of the passwords in the LinkedIn set, while the other tools generated between 6 percent and 23 percent. But when PassGAN and HashCat were combined, 27 percent of the passwords in the LinkedIn set were cracked. If cybercriminals devise a similar or enhanced version of this methodology, it could become a reliable way to hijack user accounts.
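The offline-guessing loop that PassGAN plugs into is itself simple; what the GAN improves is the quality of the candidate stream. A minimal Python sketch of that loop, using made-up stand-in data rather than the actual RockYou or LinkedIn sets:

```python
import hashlib

def sha1_hex(pw: str) -> str:
    return hashlib.sha1(pw.encode()).hexdigest()

# Stand-in for a leaked list of hashed passwords (the LinkedIn dump
# used unsalted SHA-1, which is what makes offline guessing feasible).
leaked_hashes = {sha1_hex(p) for p in ["123456", "sunshine1", "qwerty", "Xk#9v!pQ"]}

# Stand-in for the generator's output: a stream of candidate guesses.
# PassGAN's contribution is producing candidates that resemble real
# human-chosen passwords, learned from a breach corpus.
candidates = ["123456", "password", "qwerty", "sunshine", "sunshine1"]

cracked = {c for c in candidates if sha1_hex(c) in leaked_hashes}
print(f"cracked {len(cracked)} of {len(leaked_hashes)} hashes")  # -> cracked 3 of 4 hashes
```

The strong random password in the stand-in list stays uncracked, which mirrors the real finding: generated guesses recover the human-style passwords, not the random ones.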


Adversarial machine learning is a technique that threat actors can use to cause a machine learning model to malfunction. They can do so by crafting adversarial samples, which are modified input fed to the machine learning system to mess up its ability to predict accurately. In essence, this technique — also called an adversarial attack — turns the machine learning system against itself and the organization running it.

This method has been proven capable of causing machine learning models for security to perform poorly, for example by making them produce higher false positive rates. Attackers can do this by injecting malware samples that resemble benign files into machine learning training sets, poisoning them.

Machine learning models used for security can also be tricked using benign Portable Executable (PE) files infected with malicious code, or benign source code compiled together with malicious code. This technique can make a malware sample appear benign to models, preventing security solutions from accurately detecting it as malicious, since its structure still mostly consists of the original benign file.
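At its core, an adversarial sample exploits the model's own gradients. A toy Python sketch against a hypothetical linear malware scorer (the weights and features below are made up for illustration, not taken from any real detector):

```python
# A toy linear malware scorer: score = w . x, flag malicious if score > 0.
w = [0.9, -0.4, 0.7, -0.8]     # model weights (illustrative)
x = [1.0, 0.2, 0.8, 0.1]       # feature vector of a malware sample

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

assert score(w, x) > 0          # the sample is detected as malicious

# Adversarial sample: shift each feature against the gradient's sign
# (for a linear model, the gradient with respect to x is just w),
# keeping the perturbation small so the file still functions.
eps = 0.5
x_adv = [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

assert score(w, x_adv) <= 0     # the same sample now scores as benign
```

Real detectors are nonlinear, but the same principle applies: small, targeted changes to the input can move it across the decision boundary without changing the malicious behavior.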

When it comes to dealing with advanced password cracking tools such as the machine learning-powered PassGAN, users and organizations can move towards two-factor authentication schemes to reduce their reliance on passwords. One approach to this is using a one-time password (OTP) — an automatically generated string of characters that authenticates the user for a single login session or transaction.
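An OTP of this kind is commonly generated with the HOTP algorithm standardized in RFC 4226, which computes an HMAC of a shared secret over a moving counter and truncates the result to a few digits. A minimal Python sketch:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the 8-byte big-endian counter (RFC 4226).
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte selects a
    # 4-byte window, whose top bit is masked off.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test vectors for the secret "12345678901234567890":
print(hotp(b"12345678901234567890", 0))   # -> 755224
print(hotp(b"12345678901234567890", 1))   # -> 287082
```

TOTP, the variant used by most authenticator apps, is the same construction with the counter replaced by the current 30-second time step.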


Meanwhile, technologies are continuously being developed to defend against deepfakes. To detect deepfake videos, experts from projects initiated by the Pentagon and SRI International are feeding samples of real and deepfake videos to computers. This way, computers can be trained to detect fakes. 

To detect deepfake audio, experts are likewise training computers to recognize telltale inconsistencies. And as for the platforms where deepfakes can creep in, Facebook, Google, and Amazon, among other organizations, are joining forces to detect them via the Deepfake Detection Challenge (DFDC) — a project that invites people around the world to build technologies that can help detect deepfakes and other forms of manipulated media.

Adversarial attacks, on the other hand, can be prevented by making machine learning systems more robust. This can be done in two steps: first, by spotting potential security holes early in the design phase and validating every parameter; and second, by generating adversarial samples and using them to retrain models, improving their resilience.
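The second step, retraining on adversarial samples, can be sketched with a simple perceptron and made-up data: train on clean samples, craft an adversarial sample the model misclassifies, then fold it back into the training set with the correct label:

```python
def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def train(data, epochs=20):
    # Classic perceptron: bump the weights on every misclassified sample.
    w = [0.0, 0.0]
    for _ in range(epochs):
        for x, y in data:
            if y * score(w, x) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
    return w

clean = [([1.0, 0.0], +1), ([0.0, 1.0], -1)]   # made-up training data
w = train(clean)

# Evasion: nudge the positive sample against the weight signs.
eps = 0.6
x_adv = [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, clean[0][0])]
assert score(w, x_adv) <= 0                    # evades the clean model

# Adversarial retraining: add the correctly labeled adversarial sample
# back into the training set and retrain from scratch.
w_robust = train(clean + [(x_adv, +1)])
assert score(w_robust, x_adv) > 0              # no longer fooled
```

Production-scale adversarial training follows the same pattern: each round generates fresh adversarial samples against the current model and retrains, pushing the decision boundary away from easily reachable perturbations.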

Reducing the attack surface of the machine learning system can also ward off adversarial attacks. Since cybercriminals modify samples in order to probe a machine learning system, cloud-based solutions, such as products with Trend Micro XGen security, can be used to detect and block malicious probing.

Governments and private organizations, particularly cybersecurity companies, should anticipate a new era where cybercriminals use advanced technologies such as machine learning to power their attacks. As they have done in the past, cybercriminals will continue to develop more advanced and new forms of threats to be one step ahead. In this light, technologies for combating these threats should likewise continue to evolve. 

However, while it would be a good choice to implement a tailor-fit technology to detect such threats, a multilayered security defense (one that combines a variety of technologies) and the consistent application of cybersecurity best practices are still the most effective ways to defend against a wide range of threats.

CyberScout shares key cybersecurity predictions for 2020; predicts persistent threats in areas of privacy and cybersecurity

As 2019 comes to an end, cybersecurity experts are preparing for a new year—and a new decade—and all the cyber scams, breaches, attacks and privacy concerns that threaten consumers and businesses. CyberScout continues to strengthen defenses against the constantly evolving cyber threats that will shape the 2020 security landscape, encouraging consumers and business owners to stay informed and aware. 


"While consumers and business leaders are more aware of cybersecurity and privacy than ever before, cybercriminals continue to innovate," said CyberScout founder and chairman Adam Levin. "As defenses improve, the attack vectors become more nuanced and technically impressive. You are your best guardian when it comes to your privacy and personal cybersecurity."  

Levin listed the following cybersecurity predictions for 2020:
  1. Cybersecurity workforce shortages. There will be a shortage of experts, adding pressure on CISOs charged with tackling an increasingly complex threat environment. With the demand for cybersecurity professionals far exceeding supply, the market will have to start filling openings with less qualified people.

  2. The disinformation blob will grow. With the success of weaponized misinformation campaigns in the 2016 and 2018 U.S. elections, expect to see more of them in the private sector, with businesses adopting troll farm tricks to hurt the competition.

  3. Ransomware will continue to thrive. Phishing attacks will continue to lead to ransomware infecting more and more networks. Businesses, municipalities and other organizations will continue to pay whatever they must in order to regain control of their data and systems. We will also see better backup practices that help minimize or neutralize the threat of these attacks.

  4. IoT botnets will make dystopian paranoia seem normal. IoT will continue to grow exponentially; in 2020 there will be somewhere around 20 billion IoT devices in use around the world. Unfortunately, many are not secure because they are protected by nothing more than manufacturer default passwords readily available online. They will be weaponized, as in years past, but with increasing skill and computing power.

  5. The integrity of the U.S. elections will be questioned—for good reason. There are still voting machines in use that are far from secure and would not pass the simplest of audits. Some states continue to use machines that leave no paper trail. Expect questions about election security all year.

  6. Cryptocurrency miners will continue to get rich off stolen electricity. Related to the botnet craze, we will see an increase in computing-power theft used to mine cryptocurrency. With bots becoming exponentially more effective as a result of AI and cloud computing, expect a renaissance of Wild West behavior across blockchain digital ledgers.

  7. Zero-trust environments will be talked about. A few may even exist. The assumption that one can trust the home team—people within one's organization—has been replaced with zero-trust policies. Zero trust simply means that no one, inside or outside the organization, is trusted by default. With this assumption foremost, new systems make breaches and compromises harder to pull off.

  8. More people will know what "protect surface" means. The protect surface is part of the zero-trust model. An organization's attack surface includes every error-prone human in its employ, the configuration mistakes they may have committed along the way, and any number of other issues. The protect surface is much smaller and must be kept out of harm's way. The more the subject is discussed, the stronger cybersecurity is expected to become.

  9. Cars will be frozen. Driverless cars are going to hit things as well as get hit by hackers. Cars that talk to satellites are toast. It's going to happen. (Or not. But it totally could.)

  10. 5G will make the cyber smash-and-grab a thing. 5G is going to make everything move faster, as will the new generation of USB4 devices. With quicker speeds, it will take much less time to transfer data. Coincidentally, criminals appreciate this as much as the rest of us.

  11. Social media will no longer need to be private. Social media companies will probably become a bit more responsible about the way they gather, store, crunch, analyze and sell our data to marketing companies and small to medium-sized businesses looking to connect directly with consumers.

  12. State-sponsored traffic jams will be a thing. Hackers are going to target operational systems with an array of tactics, including ransomware and more DDoS attacks, snarling things up in ways we've not yet seen. The targets will include financial institutions, the power grid, elections, proprietary business information, city services and infrastructure such as traffic lights, all of which can wreak havoc on our day-to-day lives when disrupted.

  13. You're going to have personal cyber insurance. Insurance companies will be writing more comprehensive cyber liability policies for businesses and offering innovative personal cyber coverage for consumers.

  14. HR will save money by spending some. More employers will offer their employees identity protection products and services as part of their paid or voluntary benefits programs. An employee whose identity has been stolen is not very productive, and if that theft exposes their user ID or passwords, a thief may have what he or she needs to access an employer's network and sensitive databases.

  15. The cloud will leak. The parade of stories about misconfigured cloud clients and data stored without any password protection on cloud services will continue apace, perhaps in part because of the CISO and cybersecurity workforce shortage discussed in the first prediction.

  16. AI will gladly take one's job. AI is here and it's willing to work. The CISO shortage, as well as many of the innovations discussed in this list, will increasingly be addressed and powered by artificial intelligence.

"Disinformation efforts, election security and continued attacks on local governments and major metropolitan hubs are escalating concerns of how disruptive and dangerous cybercrimes are becoming," continued Levin. "2020 promises to be an interesting ride. Be smart and stay safe by staying informed and seeking cyber insurance protection for you, your family and your business."  

Masimo secures FDA clearance for neonatal RD SET Pulse Oximetry sensors with improved accuracy specifications

Masimo announced that RD SET sensors with Masimo Measure-through Motion and Low Perfusion SET pulse oximetry have received FDA clearance ...