Hardly a week goes by without a report of another cyberattack. With almost every major organization having been a victim, government agencies included, from Target and Sony to the NSA and the US Office of Personnel Management, why would you expect your organization to be immune? By many projections, the worst is yet to come. Although much progress is being made in improving hardware and software, studies have reported that between 50% and 70% of all cyberattacks are aided or abetted by insiders (usually unintentionally), so understanding cybersecurity governance and organizational culture is increasingly important. In this session, we will discuss the managerial, organizational, and strategic aspects of cybersecurity, with an emphasis on the protection of the nation's critical infrastructure.
2016 MIT Information and Communication Technologies Conference
Encryption as a means of data control (privacy and security):
For a long time, interaction on the Web has been less private and secure than many end users expect and prefer. Now, however, the widespread deployment of encryption is helping to change that.
* Making encryption widespread. For years we have known how to do encryption, but it was not widely used because it was not part of overall system design. In response, particularly as we have become aware of capabilities for network-scale monitoring, standards groups including the IETF and W3C have worked to encrypt more network connections at the protocol and API-design phase, and to make encrypted protocols such as HTTPS easier to deploy and use. Encryption won't necessarily stop a targeted attack (attackers can often compromise end-user systems even when they cannot brute-force the encryption), but it raises the effort required for surveillance and forces transparency on other network participants who want to see or shape traffic.
* Secure authentication. Too many of our "secure" communications are protected by weak password mechanisms, leaving users open to password-database breaches and phishing attacks. Strong new authentication mechanisms, being developed as web-wide standards, can replace the password, helping users and applications secure accounts more effectively. Strong, secure authentication will enable users to manage their personal interactions and data privacy, and will also help secure commercial data exchange.
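To make the second point concrete, here is a minimal sketch of challenge-response authentication with public-key signatures, the general idea behind password-replacing schemes. It uses Python's cryptography package; the flow and names are illustrative assumptions, not any particular standard's API.

    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Enrollment: the user's device creates a key pair and registers only the
    # public key with the service; the private key never leaves the device.
    device_key = Ed25519PrivateKey.generate()
    registered_public_key = device_key.public_key()

    # Login: the service sends a fresh random challenge, the device signs it,
    # and the service verifies the signature. No shared password exists to be
    # phished or leaked in a database breach.
    challenge = os.urandom(32)
    signature = device_key.sign(challenge)

    try:
        registered_public_key.verify(signature, challenge)
        print("authenticated")
    except InvalidSignature:
        print("rejected")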
The emergence of large networked systems has brought about new challenges for researchers and practitioners alike. While such systems perform well under normal operations, they can exhibit fragility in response to certain disruptions, which may lead to catastrophic cascades of failures. This phenomenon, referred to as systemic risk, emphasizes the role of the system's interconnection in causing such, possibly rare, events. The flash crash of 2010, the financial crisis of 2008, the Northeast power blackout of 2003, or simply extensive delays in air travel are just a few of many examples of fragility and systemic risk in complex interconnected systems. The term fragility is used in this context to highlight the system's closeness to failure. Notions of failure include large amplification of local disturbances (or shocks), instability, or a substantial increase in the probability of extreme events. Cascading failures, or systemic risk, fit under this umbrella and focus on local failures synchronizing to cause a breakdown in the network. Many abstracted models from transportation, finance, or the power grid fit this framework well. The important issue is to relate fragility to the size and characteristics of a network for certain types of local interactions. In this talk, we will discuss risk and efficiency in these systems, provide some constructive examples, and highlight important research directions.
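As a toy illustration of how a small local shock can cascade, here is a minimal load-redistribution model on a random graph (Python with networkx). The capacity rule, the choice of graph, and the parameter alpha are assumptions made for this sketch, not a model from the talk.

    import networkx as nx

    def cascade_size(G, seed, alpha=0.2):
        # Each node's capacity is (1 + alpha) times its initial load (its degree).
        load = {v: G.degree(v) for v in G}
        capacity = {v: (1 + alpha) * load[v] for v in G}
        failed, frontier = {seed}, [seed]
        while frontier:
            nxt = []
            for v in frontier:
                alive = [u for u in G.neighbors(v) if u not in failed]
                if not alive:
                    continue
                share = load[v] / len(alive)   # shed the failed node's load equally
                for u in alive:
                    load[u] += share
                    if load[u] > capacity[u]:  # neighbor overloaded: it fails too
                        failed.add(u)
                        nxt.append(u)
            frontier = nxt
        return len(failed)

    G = nx.barabasi_albert_graph(200, 2, seed=1)
    print(cascade_size(G, seed=0, alpha=0.05))  # one local failure, possibly many more

Sweeping alpha (spare capacity) or the network size shows how fragility depends on local slack and on the pattern of interconnection.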
In recent years, great strides have been made to scale and automate Big Data collection, storage, and processing, but deriving real insight through relational and semantic data analysis still requires time-consuming guesswork and human intuition. Now, novel approaches designed across domains (education, medicine, energy, and others) have helped identify foundational issues in general data analysis, providing the basis for developing a “Data Science Machine,” an automated system for generating predictive models from raw data.
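The abstract does not spell out how such a system works internally; as a rough sketch of the underlying idea, mechanically generating candidate features from relational data and fitting a model on them, consider the following (pandas and scikit-learn; the tables, columns, and prediction target are invented for illustration).

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier

    # Invented relational data: an entity table and a related event table.
    customers = pd.DataFrame({"customer_id": [1, 2, 3], "churned": [0, 1, 0]})
    events = pd.DataFrame({"customer_id": [1, 1, 2, 3, 3, 3],
                           "amount": [10.0, 25.0, 5.0, 7.5, 3.0, 12.0]})

    # Mechanically enumerate aggregate features over the related table.
    aggs = (events.groupby("customer_id")["amount"]
                  .agg(["count", "mean", "max", "sum"])
                  .reset_index())
    features = customers.merge(aggs, on="customer_id", how="left").fillna(0)

    # Fit a predictive model on the generated features.
    X, y = features[["count", "mean", "max", "sum"]], features["churned"]
    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    print(model.predict(X))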
Computer Science rests on an unphysical division between the description of a computation and its implementation. Many issues in computing, including its scalability, efficiency, and security, arise at that interface. I will introduce alternative approaches to aligning the representations of hardware and software, and explore some of the social and economic implications of programming atoms as well as bits.
This talk will begin by looking at predictions from the past about the future of work. Then it will focus on a promising new way to predict how work will be organized in the future: by thinking about how to create more intelligent organizations. Examples to be described include: studies of why some groups are smarter than others, studies of how people and machines together can do better than either alone, and ways to harness the collective intelligence of thousands of people to solve complex problems like climate change.
Imagine if our environment helped us to be more productive, to learn the most from our social interactions, and to find inspiration when we feel stuck. The Responsive Environments Group at the MIT Media Lab develops systems that connect ubiquitous sensors and computers through the IoT, allowing us to analyze and control networked devices and make them work in concert. The resulting interface can be considered an effective extension of the human nervous system, leveraging approaches including wearable electronics, sensor networks, and the discovery of latent dimensions in user preference for the design of intuitive lighting interfaces.
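The abstract mentions discovering latent dimensions in user preference; one common stand-in for that step is simple dimensionality reduction, sketched below with invented data (scikit-learn PCA; the group's actual method is not specified here).

    import numpy as np
    from sklearn.decomposition import PCA

    # Invented data: each row is one user's preferred settings across many raw
    # lighting controls (per-fixture intensity, color temperature, ...).
    rng = np.random.default_rng(0)
    prefs = rng.random((50, 12))          # 50 users x 12 raw control parameters

    # Reduce the raw control space to a few latent preference dimensions that a
    # simple, intuitive interface (e.g., two sliders) could expose directly.
    pca = PCA(n_components=2)
    latent = pca.fit_transform(prefs)
    print(latent.shape)                    # (50, 2)
    print(pca.explained_variance_ratio_)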
Uplevel Security
Enterprises invest millions in preventing and detecting cyberattacks but have limited technology capabilities for responding to attacks. Their current security infrastructure applies sophisticated algorithms to network and endpoint data to identify potentially malicious activity. However, the output of these appliances is an alert: an isolated data point without any surrounding context. Incident responders must go through a manual, time-consuming process to reconstruct the original context and understand how an alert relates to their historical data and external threat intelligence. Uplevel automates incident response by applying graph theory to the technical artifacts of cyberattacks. This allows organizations to reduce response times and increase the efficiency of their analysts, thereby reducing their overall exposure risk.
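As an illustration of the graph idea (not Uplevel's actual implementation; the node names are invented), one can link alerts, indicators, and past incidents in a graph and treat everything reachable from a new alert as its context.

    import networkx as nx

    # Invented artifacts: edges link alerts and incidents to the indicators
    # (IP addresses, file hashes, domains) observed in them.
    G = nx.Graph()
    G.add_edges_from([
        ("alert:1042", "ip:203.0.113.7"),
        ("alert:1042", "hash:d41d8cd9"),
        ("incident:2015-17", "ip:203.0.113.7"),
        ("incident:2015-17", "domain:bad.example"),
        ("alert:1043", "domain:bad.example"),
    ])

    # Context for a new alert: all indicators and historical incidents it is
    # connected to, reconstructed automatically instead of by hand.
    context = nx.node_connected_component(G, "alert:1042")
    print(sorted(context))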
Yaxa
According to the 2015 Verizon Data Breach Investigations Report (DBIR), 95% of breaches happen due to stolen user credentials. When a legitimate user's login credentials are stolen, imposters (malicious outsiders) using those credentials pose as insiders. Yaxa's in-line software appliance protects an enterprise's critical data-center assets and web applications in real time from such insider threats. Yaxa's unique user data-access fingerprint approach not only detects such bad actors but also takes automatic enforcement action according to configured IT policy, rather than merely generating an alert. Real-time detection of imposters and malicious users, coupled with automatic enforcement, yields large savings in investigation time and cost while improving an organization's risk posture.
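The fingerprinting method itself is not described in the abstract; as a toy sketch of the general idea, a per-user behavioral baseline can be compared against each new session and enforcement triggered on a strong deviation. The features, threshold, and z-score rule below are assumptions made for illustration.

    import numpy as np

    # Invented per-session features of one user's data access: requests per
    # minute, distinct assets touched, and log of bytes downloaded.
    baseline = np.array([[12, 3, 5.1], [15, 4, 5.3], [11, 3, 5.0], [14, 3, 5.2]])
    mu, sigma = baseline.mean(axis=0), baseline.std(axis=0) + 1e-9

    def looks_like_imposter(session, threshold=4.0):
        # Flag sessions that deviate strongly from the user's historical fingerprint.
        z = np.abs((session - mu) / sigma)
        return z.max() > threshold

    new_session = np.array([140.0, 60.0, 9.8])   # sudden bulk access
    if looks_like_imposter(new_session):
        # Enforce configured policy (e.g., drop the session) rather than only alerting.
        print("block session and require re-authentication")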
Recent advances in artificial intelligence and robotics are reshaping our thinking about the likely trajectory of occupational change and employment growth. Understanding the evolving relationship between computer capability and human skill demands requires confronting historical thinking about machine displacement of human labor and considering the contemporary incarnation of this displacement: the simultaneous growth of high-education, high-wage jobs and low-education, low-wage jobs.