After more than two decades of working in the private sector, Sara Gopalan saw firsthand the gap between what students were learning and what they needed to succeed in the cybersecurity field.

She decided to take matters into her own hands and become a teacher herself. In just a few years, she has become an integral part of the cybersecurity education community in the Inland Empire region.

Gopalan is a Career Technical Education (CTE) teacher at Temecula Valley High School (TVHS), where she specializes in the Information and Communications Technologies (ICT) pathway.

Her mother was a teacher in India and, while she didn’t initially see herself following in those footsteps, she’s glad that she did.

Gopalan started small by volunteering to teach a free course on computer fundamentals at her local library. She also became involved with CoderDojo, a worldwide network of volunteers who teach programming to children ages 7-17, and joined the CTE Advisory Committee at Temecula Valley Unified School District.

That work eventually led to an offer to become a CTE teacher. While Gopalan had decades of professional experience, she did not have teaching credentials. She found a program that allowed her to earn them in six months, then hit the ground running, building a three-course IT/cybersecurity pathway based on Cisco's Networking Academy curriculum.

In an effort to bridge the gap she saw between industry and education, Gopalan integrates guest speakers and soft skills like interviewing and teamwork into her classes.

“Because my course is part of career technical education, I need to connect it to what they’ll see in the workplace,” she said. “If they have to hire someone in the workplace, what are the skills they are looking for?”

Gopalan’s passion has positioned TVHS as a leader in the region and one that’s poised to become a statewide presence.

“Sara is a go-getter. She is incredibly resourceful and thorough in her work,” said Kim Randall, CTE Department Chair at TVHS. “She has brought her industry expertise to the classroom at TVHS, and we are so fortunate to have her working with students in this ICT pathway.”

In addition to launching a new pathway, Gopalan also helped build a CyberPatriot presence at her school. She recalls meeting with California Cyberhub Community Manager Donna Woods and wanting to emulate the success Woods had created at Moreno Valley High School.

“The students I saw were so engaged the whole time, and I was so impressed by how much they had prepared ahead of time,” Gopalan said. “They had binders of notes and highlights on every page.”

Gopalan worked with Susanne Mata, ICT-DM Deputy Sector Navigator in the Inland Empire/Desert Region, to obtain funding for what are now five CyberPatriot teams in the Temecula Valley Unified School District.

Karen Walker coaches one of those teams at Chaparral High School. She said Gopalan’s leadership and guidance made her transition into coaching very smooth.

“Sara has helped me immensely with CyberPatriot,” Walker said. “She has done most of the work researching what we needed to do and how to do it. She also has usually been the one to fill out all the necessary paperwork for our district, both to get our teams funded and to get our buses to competitions.”

Gopalan said she’s learned a lot from Cyberhub community members like Irvin Lemus. She hopes the Inland Empire will be able to achieve the success Lemus and his colleagues have had in the Bay Area. She also hopes to collaborate with middle schools to establish strong feeder courses for the ICT pathway in the Temecula Valley Unified School District.

“It is amazing to see how much awareness of cybersecurity has grown in the past two years,” she said. “We’ve gained great momentum, and I want to pass it on to middle schools so students will be prepared to try these classes and activities when they get to high school.”


Originally Posted On: singularityhub.com

We tend to compartmentalize our understanding of the world into “subjects.” From a very young age, we are misled into believing that science is separate from art, which is separate from history, which is separate from economics, and so on.

However, a true understanding of the world and our place in it requires interconnections between many disciplines and ways of thinking. Global challenges, whether they be climate change or wealth inequality, cannot be tackled with a single isolated discipline, but rather require a convergence of subjects and thinking tools.

This is where the convergence of science and the arts proves more natural than is often assumed. STEAM is an educational approach to learning that combines science, technology, engineering, the arts, and mathematics. The arts in this context refer not only to the fine arts, but also to the liberal arts and humanities.

STEM education by itself misses the development of critical 21st-century skills required as we head toward the Imagination Age and a creative economy. STEAM not only results in more meaningful learning, but also fosters more divergent thinking and, consequently, creative innovation. It is also a powerful tool for communicating scientific thought and global issues to the general public.

The artistic projects below celebrate the harmony between science and the arts.

CLIMATE CHANGE REIMAGINED 

Art can be used as a powerful call for action. One strategy is to use art to induce fear in audiences by demonstrating the consequences of inaction. An equally powerful approach is to inspire audiences by providing an exciting vision for the future.

An interactive and immersive exhibition at the World Government Summit in Dubai, Climate Change Reimagined envisions a desirable future where we have “not only survived the challenges of climate change in the mid-21st century, but have thrived.” The exhibition highlights the global threat of climate change and the urgent need for innovative solutions. It reframes the biggest problems contributing to humanity’s ecological footprint as opportunities for radical innovation.

SYMPHONY OF SCIENCE

It’s rare to meet someone who doesn’t have some appreciation for music. While there are countless genres out there, a love for music appears to be a universal human trait. Symphony of Science is a project that aims to “spread scientific knowledge and philosophy through musical remixes.”

Created by Washington-based electronic musician John D. Boswell, the remixes include audio and video samples from television programs featuring popular scientists, such as Carl Sagan, Richard Feynman, Neil deGrasse Tyson, Bill Nye, Stephen Hawking, and many more.

“Waves of Light” is a remix video that celebrates the beautiful fact that we are able to study the entire history of the universe through the power of light. Its chorus is:

Gaze up into the night sky. Capture the light and read the story of the universe. Isn’t it a wonderful thing? We are part of the universe. Isn’t it a wonderful thing? The story of the universe is our story. Carried on waves of light. Wave after wave after wave of light. All the colors of the rainbow, colors of the rainbow.

ANIMATING THE QUANTUM WORLD

The vastness of the universe can often leave us feeling small and insignificant, but then we must remember that we are astronomically massive compared to the microscopic or even quantum world. At such minuscule scales, we are the universe.

From particles popping out of nowhere to light appearing as particles and waves, the quantum world is both wondrous and confusing. It breaks down all of our intuitions about the nature of reality. In the words of legendary physicist Richard Feynman, “If you think you understand quantum mechanics, you don’t understand quantum mechanics.” The quantum world is impossible to observe directly and is studied by indirect methods of observation, such as measuring the remains of proton collisions in the laboratory.

With his short film Quantum Fluctuations: Experiments in Flux, artist Markos Kay has taken on the challenge of the impossible with a daring attempt to animate and visualize the quantum world. With visuals and music that appear to be from a different dimension, Kay visualizes various quantum phenomena such as particle decay and proton showers.

FEELING HUMAN

How do the mind and body work to make us feel pain? What role does our psychology play in our physical sensations? What is the evolutionary advantage of pain? Thanks to the hard work of researchers, we have an expansive accumulation of studies and theories that can shed light on these questions—but what better way to communicate the answers than to use art?

Exhibited at the MOD Futures Gallery in Australia, “Feeling Human” is an immersive multi-sensory exhibit that uses different technologies to give visitors experiences that provide a window into findings from centuries of pain research. In the process, attendees learn about the biological, psychological, and social influences on pain. As the gallery says, “Welcome to a dark, sensory world where stories of pain come to life…”

To feel pain is to feel human, and a better understanding of the nature of pain can contribute to a better understanding of ourselves.

MIDNIGHT STAR

An art project from 2017’s Burning Man, Midnight Star is an experiential art piece that aims to give burners a cosmic perspective. All night long, seven illuminated rings pulse rhythmically with ambient outer-space sounds, and at midnight the Big Dipper’s stars glide into perfect alignment with the installation’s seven red rings hovering above the playa.

Embedded into the installation experience is a midnight ritual. The artists and physicists of the team invite participants to take “a guided meditative tour of the night sky.” The experience combines secular spirituality, music (original scores that integrate space recordings and excerpts from talks by notable astrophysicists), and physics. In the process, participants explore the nature of the universe and take a moment to appreciate our place within it all.

Every year, Burning Man’s incredible installations are made possible by the collaboration between artists and scientists. They are symbols not only of radical self-expression, but also of human ingenuity.

SCIENCE NEEDS ARTISTS

We owe much of human advancement to those who have dedicated their lives to the pursuit of knowledge and understanding. The scientific method continues to lead to greater understanding of ourselves and the world, which in turn fuels innovation and global progress.

In the 1997 American film adaptation of Carl Sagan’s Contact, SETI scientist Dr. Ellie Arroway realizes the power of art after she travels through multiple wormholes to see what appears to be signs of an advanced civilization on another planet. The sheer awe of the experience leads Dr. Arroway to say, “Some kind of celestial event. No—no words. No words to describe it. Poetry! They should have sent a poet.” It’s no shock that Elon Musk and his SpaceX team are sending artists to the moon.

Scientific advancement on its own is not enough. We need philosophers to help us explore the implications of groundbreaking scientific findings. We need filmmakers to allow us to visualize and imagine counterintuitive insights. We need poets to highlight the awe and wonder associated with it all. The convergence of science and art will help us advance as a species.

Image Credit: NASA images / Shutterstock.com

About the Author: Raya is the Founder & CEO of Awecademy, an online platform that gives young minds the opportunity to learn, connect and contribute to human progress. She is a writer and regular speaker on the topics of innovative education, the future of work and the effects of exponential technologies on society.

Originally Posted On: threatpost.com

Called BleedingBit, this vulnerability impacts wireless networks used in a large percentage of enterprise companies.

UPDATE

Two zero-day vulnerabilities in Bluetooth Low-Energy chips made by Texas Instruments (and used in millions of wireless access points) open corporate networks to crippling stealth attacks.

Adversaries can exploit the bugs from roughly 100 to 300 feet away from the vulnerable devices. An attacker who compromises an access point can take control of it, capture all of its traffic, and then use the compromised device as a springboard for further internal attacks.

The issue impacts Wi-Fi access points made by Cisco, Cisco Meraki and Hewlett-Packard Enterprise’s Aruba, accounting for a large percentage of hardware used in corporations, according to researchers at Israeli security firm Armis. The firm discovered the two bugs earlier this year and publicly disclosed them on Thursday.

“Attacks can be devastating and carried out by unauthenticated users who can exploit these bugs and break into enterprise networks undetected while sitting in the company’s lobby,” said Ben Seri, head of research at Armis.

Texas Instruments released patches (BLE-STACK SDK version 2.2.2) for affected hardware on Thursday that will be available via OEMs. Cisco is expected to release patches for three Aironet Series wireless access points (1542 AP, 1815 AP, 4800 AP), along with patches for its Cisco Meraki series access points (MR33, MR30H, MR74, MR53E), on Thursday. And Aruba has released a patch for its Aruba 3xx and IAP-3xx series access points.

According to Aruba, “the vulnerability is applicable only if the BLE radio has been enabled in affected access points. The BLE radio is disabled by default.”

Cisco representatives told Threatpost that the BLE feature is disabled by default on its Aironet devices.

Aruba is advising its affected customers to disable the BLE radio to mitigate the vulnerability.

“Fixed software was published for all of Cisco’s affected products prior to Nov. 1. A PSIRT advisory was published at the time of the researcher’s disclosure today via our established disclosure page. Meraki also published an advisory in the customer dashboard, and documentation is available to disable the involved settings,” Cisco said in an email to Threatpost.

“The vulnerability can be exploited by an attacker in the vicinity of the affected device, provided its BLE is turned on, without any other prerequisites or knowledge about the device,” according to researchers. The attacker does not need to be on the network; he or she just needs to be within range of the access point and its BLE broadcasts/beacons.

The first vulnerability (CVE-2018-16986) is tied to the Texas Instruments CC2640/50 chips used in Cisco and Cisco Meraki access points. It is a remote code-execution flaw in the BLE chip that can be exploited by a nearby unauthenticated attacker.

“First, the attacker sends multiple benign BLE broadcast messages, called ‘advertising packets,’ which will be stored on the memory of the vulnerable BLE chip in targeted device,” researchers said. “Next, the attacker sends the overflow packet, which is a standard advertising packet with a subtle alteration – a specific bit in its header turned on instead of off. This bit causes the chip to allocate the information from the packet to a much larger space than it really needs, triggering an overflow of critical memory in the process.”

Attackers then leverage the leaked memory to run malicious code on the chip, opening a backdoor through which they can command the chip wirelessly. From there, they can manipulate the main processor of the wireless access point and take full control of it, first locally and then remotely.
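The overflow pattern researchers describe can be sketched as follows. This is an illustrative reconstruction of the bug class, not TI's actual firmware code: the packet layout, field names, and the specific header bit are assumptions, though the 31-byte legacy advertising payload limit comes from the BLE specification. The patched behavior is simply to reject any claimed length the fixed-size buffer cannot hold.

```python
ADV_BUFFER_SIZE = 31  # legacy BLE advertising payloads are capped at 31 bytes

def claimed_length(packet: bytes):
    """Read the payload length the way the chip would: a normally-unused
    header bit (the 'subtle alteration') selects a 16-bit extended length."""
    header = packet[0]
    if header & 0x80:  # hypothetical extended-length bit flipped on by the attacker
        return packet[1] | (packet[2] << 8), 3  # attacker-controlled 16-bit length
    return packet[1], 2

def parse_adv_packet(packet: bytes) -> bytes:
    """Patched behavior: validate the claimed length against the real buffer.
    The vulnerable code path skipped this check, so an oversized length
    caused a copy past the fixed-size buffer, corrupting adjacent memory."""
    length, offset = claimed_length(packet)
    if length > ADV_BUFFER_SIZE or offset + length > len(packet):
        raise ValueError("claimed payload exceeds buffer: possible overflow attempt")
    return packet[offset:offset + length]
```

A benign packet parses normally, while a packet with the extended-length bit set and a huge claimed length is rejected instead of overflowing.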

“The Texas Instrument chips are so common that an attacker could simply walk into a lobby of a company, scan for available Wi-Fi networks and begin the attack, on the assumption the BLE vulnerability is present,” said Nadir Izrael, CTO and co-founder of Armis.

A second vulnerability (CVE-2018-7080) was discovered by Armis in Texas Instruments’ over-the-air firmware download feature, used in Aruba’s Series 300 Wi-Fi access points, which also use the BLE chip.

“This vulnerability is technically a backdoor in BLE chips that was designed as a development tool, but is active in these production access points,” according to Armis. “It allows an attacker to access and install a completely new and different version of the firmware — effectively rewriting the operating system of the device.”

Researchers said the second vulnerability exists because the over-the-air security mechanism can’t differentiate between “trusted” or “malicious” firmware updates. By installing their own firmware update, an attacker can gain a foothold on the hardware and take over the access points, spread malware and move laterally across network segments, researchers said.
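The missing trust check can be sketched in a few lines. This is a minimal illustration, not Aruba's or TI's mechanism: for simplicity it pins a SHA-256 digest of a known-good image, whereas real OTA update systems verify an asymmetric signature over the image so new vendor releases can be trusted without re-pinning. All names and the sample image bytes are hypothetical.

```python
import hashlib
import hmac

# Hypothetical pinned digest of the vendor's released firmware image.
TRUSTED_FIRMWARE_SHA256 = hashlib.sha256(b"vendor-release-1.0").hexdigest()

def verify_firmware(image: bytes) -> bool:
    """Accept an OTA image only if its digest matches the pinned value.
    The vulnerable mechanism accepted any image, which is what let an
    attacker 'rewrite the operating system of the device'."""
    digest = hashlib.sha256(image).hexdigest()
    # compare_digest avoids leaking match position through timing
    return hmac.compare_digest(digest, TRUSTED_FIRMWARE_SHA256)
```

With a check like this in place, an attacker-supplied image fails verification and is never flashed.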

The vulnerabilities were collectively given the name BleedingBit from the way researchers were able to overflow packets at the bit level in the BLE memory module.

BLE is a relatively new Bluetooth protocol designed for low-power devices such as IoT hardware. It’s significant for a number of reasons, such as its mesh capabilities, but also because it extends the protocol from consumer uses (headphones and smartphone data transfers) to commercial IoT uses.

For this reason, Seri said there is concern that the BleedingBit vulnerabilities could impact a larger universe of BLE devices, such as smart locks used in hotel chains and point-of-sale hardware.

Last year, Armis discovered nine zero-day Bluetooth-related vulnerabilities, dubbed BlueBorne, in Bluetooth chips used in smartphones, TVs, laptops and car audio systems. The scale was massive, with billions of Bluetooth devices estimated to be affected.

(This article was updated with a comment from Cisco Systems on Friday 11/2 at 1pm ET)

Originally Posted On: techrepublic.com

The explosion of data in consumer and business spaces can place our productivity at risk. There are ways you can resist drowning in data.

The pace of data creation steadily increases as technology becomes more and more ingrained in people’s lives and continues to evolve.

According to Forbes.com last May, “there are 2.5 quintillion bytes of data created each day at our current pace, but that pace is only accelerating with the growth of the Internet of Things (IoT). Over the last two years alone 90 percent of the data in the world was generated.”

While technology should make our lives easier, the information it provides can negatively impact our mental function by overwhelming us with too much input.

However, don’t confuse cognitive overload with work overload. Whereas work overload is simply having too much to do and not enough time to complete it, cognitive overload refers to having too much information to process at once.

SEE: Leadership spotlight: How to make meetings worthwhile (Tech Pro Research)

Fouad ElNaggar, co-founder and CEO of Sapho, an employee experience software provider based in San Bruno, Calif., is passionate about cognitive overload. Together we developed some tips for workers on how to fix the problem.

1. CLOSE/SHUT OFF DISTRACTING APPLICATIONS

The irony of productivity applications is that they can actually make you less productive. Microsoft Office includes Outlook, an email application, which can “helpfully” notify you when new email arrives.

Sadly, this can also contribute to your information overload: if you’re in the middle of a task and you switch to Outlook to read an email, you might even forget about the task you were working on. Instant messaging apps, or frankly anything that dings or pops up an alert, are just as distracting. When trying to stay focused on a task, close or shut off any applications that could serve as potential distractions. Oh, and silence your phone, too.

2. SWITCH OFF PUSH NOTIFICATIONS

If you can’t close a potentially distracting application because you need it available, you can still quiet it down. Between Slack, Gchat, calendar, email and text messages, it probably seems like those tiny dialog boxes pop up on your screen all day long. Take a few minutes to evaluate which push notifications actually help you get work done, and turn off the rest.

SEE: Project prioritization tool: An automated workbook (Tech Pro Research)

3. BUCKET YOUR EMAIL CORRESPONDENCE

Constantly checking and responding to email is a major time drain. Set aside two times a day to answer emails, and do not check them at any other time. Put your phone on “Do Not Disturb,” and make it a point not to let notifications interrupt you during that time.

4. STAY OFF PERSONAL SOCIAL MEDIA/NEWS SITES/OTHER TEMPTATIONS

It’s easy and tempting to check social media, or your favorite news outlet while working, especially if you’re waiting for a task to finish before you proceed (such as rebooting a server or uploading a file). However, this just puts more data into your current memory banks, so to speak, so that instead of thinking about that server patching project now you’re also thinking about the NFL draft or how many people “like” your funny Facebook meme. Save social media for lunch time or after work. It’ll be more meaningful, and you can keep your work and recreation separate, as it should be.

5. UTILIZE MINIMALISM

I keep a very minimalistic workspace: a family picture, a Rick Grimes (from “The Walking Dead,” which contains many parallels to IT life) keychain figure, and a calendar. No fancy furniture, no posters, no inspiring slogans, and no clutter. This helps me stay oriented to what I need to do without the sensory overload.

I also apply the same principles to my computer: I only keep running the programs I need, and I even close unnecessary browser tabs, SSH sessions, and Windows Explorer windows so that I’m concentrating only on the task at hand.

SEE: IT jobs 2018: Hiring priorities, growth areas, and strategies to fill open roles (Tech Pro Research)

6. AVOID MULTITASKING

You may not always have a choice, but avoiding multitasking is one of the best things you can do to keep your brain from being overwhelmed. Dividing your attention among four or five parallel tasks is a sure-fire way to ensure that those tasks take longer or end up being completed less efficiently than if you accomplished them one at a time. Worse, it’s all too easy to drop tasks entirely as your attention shifts, resulting in uncompleted work.

7. UTILIZE DOCUMENTATION

Document your to-do lists, operational processes, and daily procedures you need to follow (building a new server, for instance) so that you don’t rely on memory and can quickly handle tasks—or better yet—refer them to someone else. Anytime I discover how something works or what I can improve upon, I update the related electronic documentation so I don’t have to comb through old emails, leaf through handwritten notes, or worse, ask coworkers to fill in missing details that I should have recorded.

8. TAKE NOTES AS YOU GO

In addition to relying upon established documentation to make your efforts more productive, take notes during difficult operations such as a server recovery effort or network troubleshooting endeavor. It helps to serve as a “brain dump” of your activities so that you can purge them from memory and refer to this information later, if needed.

Believe me, there’s nothing more challenging than sorting through a complex series of tasks during an outage post-mortem to recall what you did to fix the problem. A written record can save your brain.

SEE: Comparison chart: Enterprise collaboration tools (Tech Pro Research)

9. TAKE ROUTINE BREAKS

This should be a no-brainer, yet too many people consider themselves too busy to take a break, when doing so allows you to step away from work and hit the “pause” button. It’s not just about relaxing your brain so that you return to work with a more productive mindset, but a quick walk around the building might be beneficial in allowing you to think and come up with new ideas or solutions to problems you’re facing, thereby eliminating one more area of information overload.

10. AVOID OPEN SPACE SEATING AREAS

I’ve written about some of the problems of the infamous (and unfortunately common) open-seating plan in companies. In a nutshell, having no privacy and sitting in close physical and auditory proximity to others, even close friends, strains working relationships and breeds frustration.

Avoiding cognitive overload isn’t just about not taking on or dealing with too much at once, but it’s also about not letting other people’s activities intrude upon your own productivity. Whether it’s an annoying personal phone call, playing music or even just chewing loudly, other people’s nearby activity can be a source of unwanted details, which reduces your capacity to do your job. You may not have a choice about sitting in an assigned open space seat, but take advantage of opportunities such as working from home, using an available conference room, or moving to an empty part of the office when you really need to focus.

11. BREAK PROJECTS DOWN INTO CHUNKS

Facing the entirety of a complex project is a daunting mission. It’s better and more effective to break a project down into subcomponents, and then focus on these separately, one at a time.

For instance, say you want to migrate users, computers, and services from one Active Directory domain to another. This would be overwhelming to focus on at once, so the best way to proceed is to divide the project into tasks. One task could be migrating user accounts and permissions. The next task could be migrating computer accounts, and the task after that could be addressing DNS changes, and so on. Plan it out in advance, and then tackle it piece-by-piece.

12. CONTROL YOUR CALENDAR

Don’t let colleagues fill in your day with meaningless meetings. Have a conversation with your coworkers about which meetings are absolutely necessary for you to participate in and skip the rest. If you are a manager or leader, encourage your employees to schedule in-person meetings only when they are absolutely necessary.

13. DON’T TAKE YOUR PHONE INTO YOUR BEDROOM

You spend enough time on screens during the day. The simple act of charging your phone in another room gives you time to really disconnect. It also gives you a chance to wake up refreshed, and think about the day ahead before reactively reaching for your device and checking social media or email.

SEE: Research: The evolution of enterprise software UX (Tech Pro Research)

REDUCING TEAM COGNITIVE OVERLOAD

ElNaggar and I also thought of a couple of tips for business leaders on ways to reduce cognitive overload for their team. These tips include:

14. INVEST IN THE RIGHT TECHNOLOGY

Take the time to learn what processes or tools are pain points for your employees’ productivity. Research which solutions can automate certain tasks or limit daily distractions and implement them across your workforce.

15. EMBRACE EMPLOYEE-CENTRIC WORKFLOWS

ElNaggar says leaders should “embrace the idea that employee experience matters, which will have a ripple effect in their organization.” He recommends that leaders develop more employee-centric workflows that reduce interruptions, helping employees focus on priorities and accomplish more work.

An example of an employee-centric workflow would be a business application or web portal that gives employees a single, actionable view into all of their systems and breaks complex processes down into single-purpose, streamlined workflows, allowing employees to be more productive.

“Without leadership teams championing an employee-centric mindset, nothing will really change in the mid and lower levels of a company. Business leaders must start thinking about the impact their employees’ digital experience has on their work performance and overall satisfaction, and support the idea that investing in employee experience will drive employee engagement and productivity,” ElNaggar concluded.

Originally Posted On: informationweek.com

Agile, DevOps, Continuous Delivery and Continuous Deployment all help improve software delivery speed. However, as more applications and software development tools include AI, might software developers be trading trust and safety for speed?

The software delivery cadence has continued to accelerate with the rise of Agile, DevOps and continuous processes including Continuous Delivery and Continuous Deployment. The race is on to deliver software ever faster using fewer resources. Meanwhile, for competitive reasons, organizations don’t want to sacrifice quality in theory, but sometimes they do in practice.

Recognizing the need for speed and quality, more types of testing have continued to “shift left.” Traditionally, developers have always been responsible for unit testing to ensure the software meets functional expectations, but today, more of them are testing for other things, including performance and security. The benefit of the shift-left movement is the ability to catch software flaws and vulnerabilities earlier in the lifecycle when they’re faster, easier and cheaper to fix. That’s not to say that more exhaustive testing shouldn’t be done; shift-left testing just ensures that fewer defects and vulnerabilities make their way downstream.

Enter AI. More developers are including artificial intelligence in their applications, and they’re also using more AI-powered tools to do their work. Granted, not all forms of AI are equally complex or intelligent; however, the level of intelligence embedded in products continues to increase. The danger is that developers and software development tool vendors are racing to implement AI without necessarily understanding what it is they’re implementing or the associated risks.

“In my first foray into applied AI, we had to consider the implications of interfacing to triply-redundant flight control systems and weapons that kill,” said Gregg Gunsch, a retired US Air Force lieutenant colonel and retired college professor with over 20 years of experience teaching and leading research in applied artificial intelligence and machine learning, information security for computer science/engineering majors and digital forensic science. “That tended to instill a strong ‘seriously test before release’ attitude.”

Not every developer is building software with life-and-death consequences, but many are building applications and firmware that can have material impacts on end users, the enterprise, customers, partners, governments and more. Given that some forms of AI may yield unpredictable results because of the way they’re designed or because there are flaws or bias in the data, the question is whether the ongoing quest for ever-faster software delivery is practical, and if it is, whether it’s wise.

“I get concerned about putting guardrails in now, or we may miss what’s happening and then realize where the bots went wrong,” said Scott Likens, new services and emerging tech leader at PwC.

Value drives the need for speed

Part of the continuous process mantra is delivering value quickly to consumers for competitive reasons. However, quick execution and an “innovation at any cost” mentality also produce broken user experiences and functional gaffes that end users would happily trade for better-quality software delivered less frequently.

“I am very tired of being an uninformed beta test subject, but I recognize that crowdsourcing of some kinds [must] happen to collect the data necessary for training the systems and steering development. Rapid-prototyping around the user is a key tool in design engineering,” said Gunsch. “Sometimes, there may not be other good ways to collect the massive amounts [of data] needed for learning systems besides just experimenting on the entire user population.”

Attitudes about speed-quality tradeoffs differ around the world. According to PwC’s Likens, speed trumps quality in China, but the same is not true in the U.S.

“We had the social media wave where consumers wanted that instant change, but now they’re almost revolting against how often things change,” said Likens.

User attitudes also vary based on the nature of the application itself. For example, consumers expect banking applications to be reliable and secure, but they don’t have the same expectations of a social media selfie app.

“You’ve had data breaches and data leakage and now consumers are willing to accept less to be protected,” said Likens. “You can’t innovate at all costs for core enterprise [or core consumer] apps.”

Will AI help or hinder software delivery speed?

The potential risks of self-learning AI seem to indicate that trust should be included in shift-left practices and software development processes in general. While it may add yet another factor to consider earlier in the software development lifecycle (SDLC), embedding trust into processes would help ensure that this new element is executed efficiently. In fact, AI may be part of the solution that ensures that trust is not only contemplated but validated and verified.

Already, AI is being used in parts of the SDLC, such as automated software testing tools that use AI to ensure better test coverage and to prioritize what needs to be tested. It’s also driving higher levels of efficiency by enabling more tests to be run in shorter timeframes.
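As one illustration of the kind of prioritization such testing tools perform, the sketch below ranks tests by historical failure rate so the likeliest-to-fail tests run first. This is a common heuristic rather than any specific vendor's algorithm, and the function name and sample data are our own:

```python
def prioritize_tests(history: dict) -> list:
    """Rank tests so those with the highest historical failure rate run first.

    `history` maps a test name to a list of past results (True = pass).
    Real tools weight in more signals (code churn, coverage, recency),
    but failure rate alone already captures the core idea.
    """
    def failure_rate(results):
        return results.count(False) / len(results) if results else 0.0
    return sorted(history, key=lambda t: failure_rate(history[t]), reverse=True)
```

Running the flakiest tests first shortens the feedback loop: a build that is going to fail tends to fail in the first minutes instead of the last.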

According to Likens, machine learning and computer vision can produce effective UI designs because the system can look at far more permutations than a human could and generate code from it.

“Now you’re seeing AI is generating code at a level that’s human-usable. A lot of stuff we do on the UI we can do at a high quality level because we feed in unbiased training data and hand-drawings, something that machine learning vision can recognize as something that looks good,” said Likens.

Not all aspects of software development and delivery have been automated using AI yet, but more will be automated over time as tools become more sophisticated and software development practices continue to evolve. AI can help accelerate software delivery, but its application will be more valuable if that speed is matched with elements of quality, including security and trust.


Lisa Morgan is a freelance writer who covers big data and BI for InformationWeek. She has contributed articles, reports, and other types of content to various publications and sites ranging from SD Times to the Economist Intelligence Unit.

Originally Posted On: blog.360totalsecurity.com

Background
On October 18, 2018, 360 Threat Intelligence Center captured, for the first time, a sample that used Excel 4.0 macros to spread the Imminent Monitor remote control Trojan. This came only 10 days after security researchers at Outflank, a foreign security vendor, first publicly demonstrated using Excel 4.0 macros to execute ShellCode, on October 6, 2018. Although Excel 4.0 macro technology was released more than 20 years ago and was often used to build macro viruses in its early days, Microsoft long ago replaced it with VBA (Visual Basic for Applications) macros, so Excel 4.0 macros are not well known to the public. Moreover, because Excel 4.0 macros are stored in the Workbook OLE stream of the Excel 97-2003 format (.xls, a compound binary file format), they are very difficult for anti-virus software to parse and detect.
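As a hedged illustration of where these macros live, the Python sketch below walks a raw BIFF record stream and flags BOUNDSHEET records whose sheet type marks an Excel 4.0 macro sheet, including ones set to "very hidden". The record layout follows the published [MS-XLS] BoundSheet8 structure; extracting the Workbook stream from the OLE compound file itself (e.g. with a library such as olefile) is assumed to have happened already and is not shown, and the function name is ours:

```python
import struct

# BIFF record id and field codes per the documented BoundSheet8 record:
# 4 bytes stream position, 1 byte visibility, 1 byte sheet type, then the name.
BOUNDSHEET = 0x0085
SHEET_TYPES = {0x00: "worksheet", 0x01: "Excel 4.0 macro sheet",
               0x02: "chart", 0x06: "VBA module"}
VISIBILITY = {0x00: "visible", 0x01: "hidden", 0x02: "very hidden"}

def find_macro_sheets(workbook_stream: bytes) -> list:
    """Walk BIFF records (2-byte id, 2-byte length, payload) in a Workbook
    stream and describe every BOUNDSHEET entry, flagging XLM macro sheets."""
    sheets, pos = [], 0
    while pos + 4 <= len(workbook_stream):
        rec_id, rec_len = struct.unpack_from("<HH", workbook_stream, pos)
        data = workbook_stream[pos + 4:pos + 4 + rec_len]
        if rec_id == BOUNDSHEET and len(data) >= 6:
            visibility = data[4] & 0x03   # low two bits: hidden state
            sheet_type = data[5]          # 0x01 marks an Excel 4.0 macro sheet
            sheets.append({
                "type": SHEET_TYPES.get(sheet_type, hex(sheet_type)),
                "visibility": VISIBILITY.get(visibility, hex(visibility)),
                "is_xlm_macro": sheet_type == 0x01,
            })
        pos += 4 + rec_len
    return sheets
```

A "very hidden" macro sheet is invisible in the Excel UI entirely, which is exactly why scanners that only look at VBA project streams miss this technique.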

360 Threat Intelligence Center analyzed in detail how Excel 4.0 macros are stored inside Excel documents and found, through in-depth research, that by hiding the Excel 4.0 macros with a few techniques and using specially processed ShellCode, an attacker can evade both static and dynamic detection by almost all anti-virus software and execute arbitrary malicious code. Since the new Excel 4.0 macro technique has been published, and attacks using it to spread the Imminent Monitor remote control Trojan have already appeared, 360 Threat Intelligence Center is releasing this analysis report as a warning against such attacks.

By deeply analyzing how Excel 4.0 macros are stored in the compound binary file format, 360 Threat Intelligence Center constructed an exploit sample that can remotely execute arbitrary malicious code. Testing showed that many well-known anti-virus products fail to detect such samples.

Analysis of the attack sample spreading the Imminent Monitor remote control Trojan
360 Threat Intelligence Center first captured this attack sample, which uses an Excel 4.0 macro to deliver the Imminent Monitor remote control Trojan, on October 18, 2018. At the time, only one anti-virus engine on VirusTotal detected it.


The malicious Excel 4.0 macro code is hidden in a sheet of the workbook; selecting Unhide reveals the macro code.


The macro code downloads a PDF-suffixed file from hxxps://jplymell.com/dmc/InvoiceAug5e1063535cb7f5c06328ac2cd66114327.pdf and executes it. The file is actually a malicious MSI file. When run through msiexec, it decrypts and drops a .NET executable named 033ventdata.exe into the %temp% directory and executes it.


The vBM= method in Form1 calls the gRQ= function.


The gRQ= function first obtains some configuration information, including the C&C address to connect to (linkadrum.nl), and checks whether the current process path is “%temp%\ProtectedModuleHost.exe”. If not, it moves the current file to that directory and deletes the current process file.


If the process path matches, the Trojan creates a corresponding LNK file in the startup directory so that it runs automatically at startup.
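
Defenders can hunt for exactly this persistence mechanism. The Python sketch below simply lists the .lnk shortcuts in a given Startup folder so unexpected entries can be reviewed; the function name is ours, and the Windows path in the docstring is the usual per-user location, not something taken from this sample:

```python
from pathlib import Path

def find_startup_links(startup_dir: str) -> list:
    """Return the names of .lnk shortcuts in a Startup folder.

    On Windows the per-user folder is typically
    %APPDATA%\\Microsoft\\Windows\\Start Menu\\Programs\\Startup.
    Any shortcut here runs at logon, so unrecognized entries
    (e.g. one pointing at an executable in %temp%) deserve scrutiny.
    """
    return sorted(p.name for p in Path(startup_dir).glob("*.lnk"))
```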


After that, it starts the InstallUtil.exe process and injects the Trojan’s main PE payload into it.


Analysis of the Trojan’s main control module
The injected main Trojan PE file is also a .NET program. After running, it loads the 7-Zip LZMA library DLL and calls it to decompress the embedded Trojan EXE, which it then loads into memory. The EXE is heavily obfuscated. Once loaded, it connects to linkadrum.nl and accepts commands, providing full remote control functionality.


After decompilation, the telltale string “Imminent-Monitor-Client-Watermark” is clearly visible.


The Imminent Monitor RAT is commercial remote control software; its official website is imminentmethods.net, and it covers essentially all remote control functions.


Recommendation
The analysis of advanced attacks in recent years shows that, because exploiting vulnerabilities such as Office 0-days is expensive, most attackers prefer to use Office VBA macros to execute malicious code. The publication of this Excel 4.0 macro technique will pose new challenges for detection.

Enterprise users should be cautious about opening documents of unknown origin. If necessary, disable all macro code execution in Excel via File: Options – Trust Center – Trust Center Settings – Macros.

At present, 360 Threat Intelligence Center supports detection of such attacks and of samples using this macro technique. In addition, its self-developed scanning engine can statically extract the macros and the ShellCode exploit code from attack samples.



From autonomous things and blockchain to quantum computing: how many of these technologies are you ready for?

This article originally appeared on ZDNet

Tech analyst firm Gartner has compiled a list of the top ten strategic technology trends that organisations need to explore in 2019. According to Gartner, these technologies have substantial disruptive potential and are either on the edge of making a big impact, or could reach a tipping point in the next five years.

Some of these trends will be combined: “Artificial intelligence (AI) in the form of automated things and augmented intelligence is being used together with the Internet of Things (IoT), edge computing and digital twins to deliver highly integrated smart spaces,” explained Gartner vice-president David Cearley.

THE ANALYST FIRM’S TOP 10 STRATEGIC TECHNOLOGY TRENDS FOR 2019 INCLUDE:

1. Autonomous things

This includes robots, drones and autonomous vehicles that use AI to automate functions previously performed by humans. The next shift is likely to be from standalone intelligent things to swarms of collaborative devices working either independently or with human input, Gartner predicts. For example, a drone could decide that a field is ready for harvesting, and dispatch a robot harvester. “Or in the delivery market, the most effective solution may be to use an autonomous vehicle to move packages to the target area. Robots and drones on board the vehicle could then ensure final delivery of the package,” said Cearley.

2. Augmented analytics

Augmented analytics focuses on the use of machine learning to improve how analytics content is developed and used. Gartner said augmented analytics capabilities will quickly go mainstream as part of data preparation, data management, modern analytics, business process management, process mining and data science platforms. As it automates the process of data preparation, insight generation and insight visualisation, it could eliminate the need for professional data scientists in many scenarios.
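
The "automated insight generation" Gartner describes can be sketched in miniature: rank numeric features by how strongly they correlate with a target column and phrase the strongest one as a plain-language finding. This is a toy illustration of the idea, not any vendor's algorithm, and the function name and sample data are invented:

```python
def top_driver(rows: list, target: str) -> str:
    """Rank numeric features by absolute Pearson correlation with `target`
    and return a plain-language 'insight' about the strongest one.
    `rows` is a list of dicts with identical numeric keys."""
    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy) if sx and sy else 0.0

    features = [k for k in rows[0] if k != target]
    scores = {f: abs(pearson([r[f] for r in rows],
                             [r[target] for r in rows]))
              for f in features}
    best = max(scores, key=scores.get)
    return f"'{best}' is the strongest driver of '{target}' (|r|={scores[best]:.2f})"
```

Production augmented-analytics platforms layer natural-language generation, significance testing and visualisation on top, but the pipeline is the same: compute, rank, narrate.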

3. AI-driven development

Developing applications with AI-powered features will become easier, Gartner said, using predefined AI models delivered as a service. Another shift is that AI will be used in the data science, application development and testing elements of the development process. By 2022, at least 40 percent of new application development projects will have AI co-developers on their team, the analyst firm predicts. “Tools that enable non-professionals to generate applications without coding are not new, but we expect that AI-powered systems will drive a new level of flexibility,” said Cearley.


4. Digital twins

A digital twin refers to the digital representation of a real-world entity or system. By 2020, Gartner estimates there will be more than 20 billion connected sensors and endpoints, and digital twins will exist for potentially billions of things, helping companies to better understand their systems and business processes.
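
At its core a digital twin is a software object that mirrors the last-known state of a physical asset from its telemetry, so questions can be asked of the model instead of the machine. The minimal Python sketch below is illustrative only; the class, field names and service threshold are invented, not from Gartner or any real product:

```python
from dataclasses import dataclass, field

@dataclass
class PumpTwin:
    """Toy digital twin of a physical pump, updated from sensor readings."""
    device_id: str
    rpm: float = 0.0
    temperature_c: float = 0.0
    history: list = field(default_factory=list)

    def ingest(self, reading: dict) -> None:
        # Mirror the latest telemetry message into the twin's state.
        self.rpm = reading.get("rpm", self.rpm)
        self.temperature_c = reading.get("temperature_c", self.temperature_c)
        self.history.append(reading)

    def needs_service(self) -> bool:
        # Queries run against the twin, not the physical device
        # (80 °C is an arbitrary threshold for illustration).
        return self.temperature_c > 80.0
```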

5. Edge computing

Edge computing is a growing area of interest, for now driven mostly by the IoT and the need to keep processing close to the edge of the network rather than in a central cloud server. Over the next five years, specialised AI chips, along with greater processing power, storage and other advanced capabilities, will be added to a wider array of edge devices, Gartner said. In the long term, 5G will offer lower latency and higher bandwidth, and enable more edge endpoints per square kilometre.

6. Immersive experience

Gartner looks beyond virtual reality and augmented reality to a future model of immersive user experience, where we connect with the digital world across hundreds of surrounding edge devices. These include traditional computing devices, wearables, cars, environmental sensors and consumer appliances.

“This multi-experience environment will create an ambient experience in which the spaces that surround us define ‘the computer’ rather than the individual devices. In effect, the environment is the computer,” said Cearley.

7. Blockchain

Current blockchain technologies and concepts are immature, poorly understood and unproven in mission-critical, at-scale business operations, said Gartner. Despite this, the potential for disruption means that CIOs should begin evaluating blockchain, even if they don’t aggressively adopt these technologies in the next few years.

8. Smart spaces

A smart space is a physical or digital environment in which humans and technology-enabled systems interact, such as smart cities, digital workplaces, smart homes and connected factories. Gartner said this area is growing fast, with smart spaces becoming an integral part of our daily lives.

9. Digital ethics and privacy

Companies need to proactively address issues around digital ethics and privacy. “Shifting from privacy to ethics moves the conversation beyond ‘are we compliant’ towards ‘are we doing the right thing’,” said Cearley.

10. Quantum computing

It’s still very early days for quantum computing, which promises to help find answers to problems too complex for traditional computers to solve. Industries including automotive, financial, insurance, pharmaceuticals and the military have the most to gain from advancements in quantum computing, and Gartner said that CIOs should start planning for it by understanding how it can apply to real-world business problems, with the aim of using it by 2023 to 2025.