Using Machine Analysis to deep-dive into technologies that have the potential to reshape multiple industries.
Exploring Disruptive Technologies with AI
About This Report
AMPLYFI believes that Market Intelligence is most valued when there is change – so that is where we spend our time testing our tools. Technology disruption has been the major market changing force for the last hundred years and this does not look set to abate.
In this piece we decided to test the AMPLYFI AI more thoroughly. Working with an experienced analyst, we asked our machine to unearth technologies that are not well known but have the potential to mimic the impact that Generative AI is currently having on organisations (read our Generative AI report for more on this). To summarise the brief, we asked our Human/Machine Analyst team to identify:
8 Technologies you may have never heard of that are coming to change your industry.
To watch our senior analyst talk through her experiences of using AMPLYFI AI to create this in-depth report, please click here.
“Any sufficiently advanced technology is indistinguishable from magic”
- Arthur C. Clarke
Emerging Solutions
Many Commercially Available Solutions Already in Place.
Human Exoskeletons
Superhumans come to life
Overview
Definition
Human exoskeletons, once a staple of science fiction, have transitioned into a burgeoning field of technology with the potential to revolutionise the way we live, work, and rehabilitate from injuries. Human exoskeletons are wearable devices designed to enhance the physical capabilities of their users. They achieve this by providing additional strength, endurance, and support to the human body, thereby reducing the strain on muscles and joints.
Key Concepts
There are currently three main exoskeleton concepts:
- Passive – Passive exoskeletons use no active power source and rely on mechanical elements like springs and dampers to augment human movements. An example is the LifeSuit2, which uses levers and pulleys to enhance movement without added power.
- Powered – Active exoskeletons are powered by motors and batteries, providing more significant strength and endurance enhancements. They can assist with more complex and demanding tasks, such as lifting heavy objects or aiding individuals with severe mobility issues to walk. The HAL suit is a notable example, using bio-signals to mirror user movements.
- Soft – Soft exosuits, like those developed by Harvard’s Biodesign Lab and Superflex, represent a newer category that uses flexible materials to give users greater comfort. These devices are particularly promising for medical rehabilitation and everyday use.
Historic Development
The concept of augmenting human capabilities through external frameworks can be traced back to the early 1960s with the development of the Man Amplifier at Cornell University’s Aeronautical Lab and the Hardiman Suit by General Electric in collaboration with the U.S. military. Despite their potential, these early prototypes faced technical challenges that hindered their practical application. The idea of powered exoskeletons was further popularised in science fiction, notably in Robert A. Heinlein’s novel “Starship Troopers” and the film “Aliens,” setting the stage for future developments.
Exoskeleton technology has roots in the evolution of mobility aids, from crutches and wheelchairs to modern exoskeleton suits designed for rehabilitation and mobility assistance. Innovations in materials science and power sources have led to lighter, more durable exoskeletons that offer precise control and accuracy, significantly impacting the lives of individuals with mobility issues.
Recent trends in exoskeleton technology include the development of “soft” exoskeletons, which use flexible fabrics and artificial muscles, and the exploration of new control and interaction interfaces to make exoskeleton use more natural and intuitive. These innovations, along with advancements in understanding human intent and compliance control, are pushing the boundaries of what exoskeletons can achieve, offering more efficient, user-friendly devices.
Innovation Drivers
Key Technologies
- Actuation: Utilises hydraulic, pneumatic, or electric power to generate movement and support. Hydraulic actuators are known for providing high power output and smooth movements, making them suitable for heavy-duty applications. Pneumatic actuators are lightweight and can offer a soft, compliant interaction with the user, which is beneficial for certain rehabilitation applications. Electric actuators are highly controllable, efficient, and can be precisely regulated, making them ideal for a wide range of exoskeleton applications.
- Sensors: Integral for monitoring user intentions and the environment, sensors in exoskeletons collect data on position, velocity, force, and user interaction. This information is crucial for the system to adapt and respond to the user’s movements and the surrounding conditions effectively.
- Control Algorithms: The brain of the exoskeleton, control systems process sensor data to make real-time decisions about actuator responses. Advanced algorithms enable the exoskeleton to provide smooth and natural assistance or resistance, tailored to the user’s needs and the task at hand.
- Electrical Stimulation: Enhances the functionality of exoskeletons by stimulating the user’s muscles, providing additional force and support, and potentially aiding in rehabilitation and strength training.
- Epidural Stimulation: Targets the spinal cord to restore movement and function in individuals with motor impairments. When combined with exoskeleton technology, it can significantly improve outcomes in rehabilitation and mobility assistance.
- Brain-Computer Interface (BCI): Allows for direct communication between the user’s brain and the exoskeleton, enabling control of the exoskeleton through thought, particularly groundbreaking for individuals with severe motor impairments.
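To make the interplay of sensors, control algorithms, and actuators concrete, the sense-decide-actuate loop at the heart of a powered exoskeleton can be sketched in a few lines. The gain, torque limit, and function names below are hypothetical, chosen purely for illustration rather than taken from any real device:

```python
# Minimal sketch of one cycle of an exoskeleton control loop: sensor
# readings are turned into an assistive actuator torque. The gain and
# limit values are illustrative assumptions, not real specifications.

def assist_torque(joint_velocity, user_torque, gain=0.5, max_torque=25.0):
    """Proportional assistance: amplify the torque the user is already
    producing, clipped to the actuator's safe limit."""
    torque = gain * user_torque
    # Never push against the direction the user is currently moving.
    if torque * joint_velocity < 0:
        torque = 0.0
    return max(-max_torque, min(max_torque, torque))
```

In a real device this function would run hundreds of times per second, fed by the position, velocity, and force sensors described above.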
Key Organisations
Research & Academic
- Beihang University is among the academic institutions contributing to exoskeleton research.
- Virginia Tech researchers, supported by a $3 million award from the National Science Foundation, are working on developing new controls and human-machine interfaces for powered exoskeletons.
- Project MARCH at TU Delft focuses on creating powered exoskeletons for individuals with spinal cord injuries, emphasising user co-creation and smart technologies.
Public Sector
- The U.S. military has been involved in the development of exoskeletons through programs like the Tactical Assault Light Operator Suit (TALOS) and collaborations with companies like Lockheed Martin for the Onyx powered exoskeleton.
Private Sector
- Ekso Bionics, Cyberdyne, and Lifeward are among the companies that have developed exoskeletons for medical rehabilitation and industrial applications.
- suitX, a startup based in Berkeley, California, has developed the Phoenix exoskeleton for individuals with severe mobility issues and the MAX workplace exoskeleton for industrial tasks.
- Sarcos Robotics has developed the Guardian XO, a whole-body powered exoskeleton, primarily targeting military applications.
Next Steps
Future Applications
Digitisation
- Movement Energy Harvesting – The potential for energy harvesting from movement underscores the role of exoskeletons in driving advancements in related technology and hardware sectors.
Engineering
- Productivity Boost – Industrial exoskeletons that are designed to support body joints and provide assistive movements increase industrial worker productivity and reduce injury risks. For instance, Ford’s implementation of the EksoVest has significantly eased the strain of repetitive tasks for assembly line workers, leading to a dramatic decline in work-related injuries and an increase in energy levels throughout the day.
- Addressing the Industrial Skills Gap – The looming industrial skills gap, projected to leave 2 million manufacturing jobs unfilled over the next decade due to a shortage of qualified applicants, is exacerbated by an ageing workforce and the younger generation’s disinterest in manufacturing careers. Exoskeletons can make heavy physical jobs less physically demanding and safer, potentially attracting younger generations to the manufacturing sector. Additionally, these devices can extend the working life of an ageing workforce and make certain jobs accessible to a broader range of people, including those with reduced physical abilities.
Health
- Reduced Musculoskeletal Disorders – Low-back exoskeletons target the prevalent issue of work-related musculoskeletal disorders, notably low-back pain, which is a leading cause of work absenteeism in the industrial sector. By providing support and reducing strain during physically demanding tasks, these exoskeletons aim to enhance the health and quality of life of industrial workers.
- Upper Limb Exoskeletons – Wearable upper limb exoskeletons (ULEs) are designed to assist with heavy lifting, thereby reducing muscle fatigue and the risk of musculoskeletal injuries. By providing auxiliary force to the wearer’s upper limbs, ULEs improve operation efficiency and lower the incidence of work-related injuries.
- Functional Electrical Stimulation (FES) – Exoskeletons utilising FES assist individuals with mobility impairments by stimulating muscles to aid movement. The Synapsuit project, for instance, aims to reduce muscle fatigue associated with continuous electrical stimulation, a common barrier to the everyday use of FES systems.
Current Barriers
Economic
- The wheelchair presents a formidable market barrier due to its affordability and effectiveness. The industry’s pivot towards healthcare applications, such as gait training, indicates a reassessment of the immediate market potential for personal mobility devices. However, early models failed to reduce the metabolic cost of walking, and exoskeletons designed for gait rehabilitation initially struggled to demonstrate clinical improvements justifying their cost.
Social
- User acceptance and comfort are crucial for the widespread adoption of exoskeletons. The development of soft exoskeletons (exosuits) aims to address comfort for prolonged wear, but designing devices that are both effective and comfortable remains a challenge.
Technological
- Replicating the complexity of a bipedal gait and ensuring coordination between the device and the wearer are significant challenges. In particular, building exoskeletons that accurately mimic human locomotion and can carry heavy loads without compromising comfort or mobility is technically demanding. The lack of in-depth research on the biomechanical interactions between humans and exoskeletons will also need to be addressed.
- With exoskeletons becoming part of IoT, cybersecurity becomes a key blocker. Protecting the data transmitted by exoskeletons and ensuring the secure operation of these devices is paramount to adoption in any industry.
- The use of exoskeletons in various sectors generates a significant amount of data regarding body position, work, and movement. This data needs to be transmitted, processed, and analysed, driving demand for robust telecommunications infrastructure and advanced data analytics solutions.
Synthetic Biology
Playing God with Genetics
Overview
Definition
Synthetic biology is a fusion of biology and engineering that aims to design, construct, and modify biological systems that do not exist in nature, including the synthesis of entirely novel gene sequences. The field is characterised by the goal of engineering biological systems for specific applications, including the creation of new life forms with predefined functions, mirroring strategies used in the electronics industry for constructing complex objects.
Key Concepts
There are five main types of synthetic biology:
- Bioengineering: Applying engineering principles to create or modify living things.
- Synthetic Genomics: Designing and building new genomes to reprogram cells.
- Protocell Biology: Creating simplified, artificial cells to understand the origins of life.
- Unconventional Molecular Biology: Using new techniques to manipulate biological systems at the molecular level.
- In Silico Synthetic Biology: Utilising computer models to design and predict biological behaviour.
Historic Development
Born from the discovery of DNA’s structure in 1953, synthetic biology has blossomed from basic genetic engineering to creating entirely new life forms. Beyond inserting artificial DNA, it now explores merging biological and non-biological systems.
Standardising genetic parts, building artificial networks, and engineering bacteria for targeted therapies are key achievements. Creating the first synthetic cell and anti-malarial drugs like artemisinin showcase its medical and environmental potential. Machine learning is further revolutionising biological design.
The early 21st century solidified synthetic biology as a distinct field with many significant achievements. Milestones like building synthetic genomes and cells, along with the first international conference (“Synthetic Biology 1.0”) and dedicated research institutions cemented its status.
Synthetic biology’s interdisciplinary nature draws from computer science, engineering, mathematics, and more. This broad reach underscores its potential to solve complex problems by harnessing knowledge from diverse fields. With established applications and ongoing advancements, synthetic biology promises significant contributions in the years to come.
Innovation Drivers
Key Technologies
- Genetic Engineering and CRISPR-Cas9: Fundamental to synthetic biology, allowing precise modifications to DNA.
- Synthetic Genomes: Construction of entirely synthetic genomes for creating novel organisms.
- Biofabrication: Using living cells to produce materials and structures, such as biodegradable alternatives to plastics.
- Metabolic Engineering and Directed Evolution: Enabling the development of chemicals and materials by engineered cells or enzymes.
- Automated Strain Engineering and Metagenomic Discovery: Optimising genetic strains for industrial applications.
- Gene Circuit Design and Genome Editing (TALENs): Key technologies for developing products like high-oleic oil from genome-edited soybeans.
Key Organisations
Research & Academic
- University of California, Berkeley: A leading institution in synthetic biology, contributing foundational technologies.
- Imperial College London: Home to the Centre for Synthetic Biology and Innovation (CSynBI), focusing on integrating social sciences.
- University of Edinburgh: Hosted a significant meeting on future trends in synthetic biology, indicating its role in advancing the field.
Public Sector
- UK Synthetic Biology Leadership Council: Coordinates activities and strategic direction for synthetic biology in the UK.
- U.S. National Institute of Standards and Technology (NIST): Launched the Synthetic Biology Standards Consortium to establish industry-wide standards.
- Defense Advanced Research Projects Agency (DARPA): Aims to mass-produce engineered organisms through its Living Foundries program.
Private Sector
- SynBioVen: A £20-million venture fund established to support synthetic biology startups in the UK.
- Amyris Technologies: Utilises synthetic biology for the development of parts-based organisms.
- Impossible Foods: Engineered yeast to produce soy leghemoglobin for its Impossible Burger.
- iGEM (International Genetically Engineered Machine): Fosters innovation and education in synthetic biology through global competitions.
Next Steps
Future Applications
Digitisation
- Bio Semiconductors – In the technology and semiconductor sectors, synthetic biology enables the development of bio-based computing systems and semiconductors. The integration of omics data, hardware, and software resources in synthetic systems can lead to more efficient genetic circuits, enabling more biologically inspired computing systems and improving semiconductor manufacturing processes.
- DNA Based Data Storage – One of the most groundbreaking applications of synthetic biology in the enterprise technology sector is in data storage. DNA-based data storage promises unparalleled density and longevity, potentially revolutionising how data is stored and accessed in the future.
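The core idea behind DNA-based storage, mapping digital bits onto the four bases, can be illustrated with a minimal sketch. This direct two-bits-per-base encoding is a simplification: real systems add error correction and avoid patterns such as long homopolymer runs that are hard to synthesise and sequence.

```python
# Illustrative sketch of DNA data storage: each pair of bits maps to one
# of the four bases. A real encoding scheme is considerably more complex.

BASES = "ACGT"  # 00 -> A, 01 -> C, 10 -> G, 11 -> T

def encode(data: bytes) -> str:
    sequence = []
    for byte in data:
        for shift in (6, 4, 2, 0):          # four 2-bit chunks per byte
            sequence.append(BASES[(byte >> shift) & 0b11])
    return "".join(sequence)

def decode(sequence: str) -> bytes:
    out = bytearray()
    for i in range(0, len(sequence), 4):
        byte = 0
        for base in sequence[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)
```

At two bits per base, a single gram of DNA could in principle hold on the order of hundreds of petabytes, which is what makes the density claim above so striking.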
Security
- Adaptive Information Security – Synthetic biology’s ability to rank enzymatic pathways and find minimal source pathways for synthesising target compounds can lead to the development of bio-based cyber security measures and hardware components, such as self-repairing circuits or biologically encrypted data storage.
Engineering
- Living Materials – The development of living materials capable of self-repair could significantly reduce maintenance costs and increase longevity. This could also lead to enhanced properties such as lighter weight, increased strength, and improved thermal resistance or customisability.
- Bio Based Chemicals – The development of bio-based chemicals and materials, reducing dependency on petrochemicals and associated environmental impacts. It enables the biosynthesis of complex chemicals and materials that are difficult or expensive to produce through traditional chemical processes.
Energy
- Biofuels – Biofuels produced through synthetic biology can contribute to the development of microbial fuel cells or bio-batteries, providing innovative energy storage solutions.
Health
- Drug Discovery – Synthetic biology is transforming the way drugs are discovered, developed, and produced. It facilitates a deeper understanding of disease mechanisms, aids in the identification of novel drug targets, and enables the design of cost-effective microbial production processes for complex natural products.
- Genetic Treatments – In a wider healthcare context, synthetic biology could enable treatments without the use of drugs, for example overcoming limitations in cancer treatment through the creation of custom-designed DNA for targeting specific cancer cells, exemplified in CAR-T therapy.
Agricultural
- Resilient Crops – Synthetic biology can significantly impact agriculture by enhancing crop yield, nutritional value, and resistance to pests and diseases. Bioengineered plants can be designed to adapt to adverse environmental conditions, reducing the need for chemical inputs and contributing to sustainable agricultural practices.
- Synthetic Crops – Synthetic biology addresses the over-harvesting of natural resources by creating synthetic alternatives to products like insulin, vanillin, and squalane. This not only reduces environmental impact but also ensures a sustainable supply of these products.
Climate
- Environmentally Restorative Plants – Engineering plants and microorganisms to require less fertiliser, digest plastics, and break down toxic chemicals. This offers more efficient and less harmful alternatives to current environmental cleanup methods, addressing significant limitations in environmental remediation.
Current Barriers
Social
- Misconceptions and a lack of understanding about synthetic biology among the general public can lead to resistance or opposition. Ethical concerns, such as the perceived unnaturalness of synthetic biology applications and fears about their safety, contribute to public scepticism.
- The rising market demand for sustainable products, driven by societal awareness of health, well-being, and environmental issues, sets high expectations for synthetic biology products. Consumers and industries demand sustainability, pushing for the replacement of petrochemical-derived products with bio-derived alternatives. This demand, while a driver for synthetic biology, also raises the bar for what these technologies must achieve to be commercially viable.
Technological
- Technical hurdles, including the predictability and robustness of synthetic biological systems, pose significant barriers to the adoption of synthetic biology. Achieving a level of design precision that ensures synthetic organisms behave as intended in diverse environmental conditions will be challenging.
- The transition from academic to commercial settings also presents challenges, as it requires scaling up lab-scale developments to industrial applications, a process fraught with technical and economic difficulties.
- The lack of standardised methods, materials, and documentation in synthetic biology hinders the field’s development and adoption. This absence of industry-wide standards makes it difficult for researchers and companies to share information and compare results.
- Infrastructure gaps, particularly in the context of open development, impede cooperative action and collective outcomes.
Legal
- One of the primary barriers to the adoption of synthetic biology is the complex regulatory and ethical landscape. The rapid pace of innovation in synthetic biology often outstrips existing regulatory frameworks, leading to a regulatory lag that can hinder the development and commercialisation of synthetic biology applications.
- Ethical concerns, such as the potential for bioterrorism, environmental risks, and moral objections to the creation of synthetic life, further complicate the regulatory environment.
- The complexity and vastness of the field make it difficult to keep track of developments and ensure they meet regulatory standards. This challenge is highlighted by the Wilson Center’s initiative to create an inventory tracking the array of synthetic biology products.
Rare Solutions
Limited commercially available products.
Brain-Computer Interfaces (BCIs)
The Arrival of Cyborgs
Overview
Definition
Brain-Computer Interfaces (BCIs), also known as Brain-Machine Interfaces (BMIs), represent a revolutionary technology that bridges the human brain and external devices, enabling direct communication pathways between neural activity and computational systems.
Key Concepts
BCIs capture the brain’s electrical signals and translate them into commands that can control external devices or software. This involves several steps, starting with signal measurement, which can be achieved through non-invasive methods, or directly from neurons using implanted (invasive) electrodes. The signals are then decoded using algorithms that interpret the brain’s intentions from these electrical patterns. This decoding process is crucial for translating thoughts into actionable commands for external devices.
Non-invasive BCIs, such as those using EEG, do not require surgery and capture signals from the scalp. In contrast, invasive BCIs involve implanting electrodes directly into the brain, offering a better signal-to-noise ratio and more detailed recordings of brain activity. Each approach has its advantages and challenges, with non-invasive methods being safer and less costly, while invasive methods provide more precise control and richer data.
Artificial Intelligence also plays a pivotal role in enhancing the functionality of BCIs. AI algorithms are trained to recognise patterns in the brain’s electrical activity, enabling more accurate and efficient interpretation of user intentions.
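The decoding step described above can be illustrated with a toy sketch: estimating the power of the 8-12 Hz "alpha" band in a single EEG channel and mapping it to a binary command. The sampling rate, threshold, and command names are hypothetical values chosen purely for the example; real decoders use far richer features and trained models.

```python
import math

# Toy sketch of BCI signal decoding: measure alpha-band (8-12 Hz) power
# in one EEG channel and map it to a command. All values are illustrative.

FS = 250  # assumed sampling rate, in samples per second

def goertzel_power(samples, freq, fs=FS):
    """Signal power at a single frequency, via the Goertzel recursion."""
    coeff = 2 * math.cos(2 * math.pi * freq / fs)
    s1 = s2 = 0.0
    for x in samples:
        s1, s2 = x + coeff * s1 - s2, s1
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def decode_command(samples, threshold=1000.0):
    """Strong alpha rhythm -> 'select'; otherwise 'idle'."""
    alpha = sum(goertzel_power(samples, f) for f in range(8, 13))
    return "select" if alpha > threshold else "idle"
```

The same structure, signal measurement followed by algorithmic interpretation, underlies both non-invasive and invasive systems; only the quality of the input signal differs.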
Historic Development
The journey of BCIs began with foundational discoveries in the early 20th century. Hans Berger’s invention of the electroencephalogram (EEG) in 1924 marked a significant milestone, providing a non-invasive method to record the brain’s electrical activity. This discovery laid the groundwork for understanding the brain’s electrical patterns and their relationship to various states and disorders.
The 1970s witnessed the first developments aimed at utilising brain waves for computing interfaces, with research expanding to include animal experiments in the 1990s. Notably, in 2000, monkeys were able to control robots through their thoughts, demonstrating the potential of BCIs beyond theoretical applications.
The transition from animal experiments to human applications was marked by Matt Nagle’s groundbreaking achievement in 2004. Nagle, who was paralysed, could control a computer cursor and a prosthetic hand using his thoughts, thanks to an implanted BCI. This event underscored the potential of BCIs to restore lost functions and enhance human capabilities.
Recent years have seen remarkable advancements in BCI technology. UCSF’s Chang Lab created a thought-to-text interface, allowing a person with limited movement and speech capabilities to communicate at a rate of up to 18 words per minute with 93% accuracy. Similarly, Blackrock Neurotech, in collaboration with Stanford University, developed a system that converts imaginary handwriting into text, aiming to assist individuals with severe physical limitations.
Innovation Drivers
Key Technologies
The landscape of BCI technologies is diverse, with both non-invasive and invasive interfaces showing promise.
- Intracortical Wireless BCI: Demonstrated in the BrainGate clinical trial, this technology enables patients to interact with tablets at high accuracy and speed, showcasing the potential for communication and control applications.
- Flexible Electronics: Addressing the challenge of mechanical mismatch, flexible electronics from Harvard University could lead to safer, more stable long-term neural recording.
- Optical Interfaces: Optical interfaces use light to detect and measure brain activity for communication with external devices. Being developed at the University of California, Berkeley, they could revolutionise BCIs by offering solutions to current limitations in scaling and precision.
- Emotiv Epoc: The Emotiv Epoc is a wireless EEG headset designed to measure brain activity. Identified as a leading solution in non-invasive BCI technology, the Emotiv Epoc stands out for its functionality and user comfort for everyday wear.
Key Organisations
Research & Academic
- Harvard University: Researchers at Harvard are exploring flexible electronics to address the mechanical mismatch between conventional electronics and soft brain tissue, potentially revolutionising long-term neural recording.
- University of California Berkeley: The development of optical interfaces at Berkeley could offer solutions to scaling, precision, and invasiveness in BCIs.
- University of Southern California (USC): USC’s Ted Berger has demonstrated a computer chip capable of interacting with live cells, a step towards implantable devices for enhancing memory functions.
Private Sector
- Neuralink: Founded by Elon Musk, Neuralink focuses on developing brain implants to enable direct interfacing with computers, with ambitions for human trials pending FDA approval.
- Synchron: Known for its groundbreaking BCI that allows paralysed patients to perform online activities without invasive brain surgery, Synchron represents a significant leap in neural control interfaces.
- Paradromics: This startup is working on advanced BCIs for tasks like steering wheelchairs through thought, emphasising the brain’s role as a “data organ”.
- Braingrade: Aiming to develop BCIs for enhancing memory function, Braingrade is focused on building the data infrastructure necessary for AI analysis of brain data.
- Meta (Facebook): Initially venturing into BCI technology for text creation through thought, Meta has since shifted focus but continues to explore AI applications that could intersect with BCIs.
Next Steps
Future Applications
Digitisation
- Faster Interfaces – Faster and more efficient ways of communicating with machines. This includes controlling devices, typing messages, or flying drones through thought alone, bypassing traditional physical interfaces like keyboards and mice.
- Enhanced Cognitive Capabilities – BCIs can augment human cognitive abilities, enabling consultants to access and process vast amounts of data at speeds beyond human capability alone. This could lead to more informed decision-making and innovative problem-solving approaches.
- Brain Based Data Storage – The development of BCIs requires the creation of sophisticated data infrastructure tools to collect, parse, and analyse brain data. This focus on data infrastructure could spur innovation in data management and analysis techniques, providing new methods for handling large datasets and extracting valuable insights.
Security
- Brain Authentication – The integration of BCI and neural networks presents a promising advancement in enhancing data security and authentication processes. By recognising EEG signal patterns, the authentication model can become self-learning, thereby improving its efficiency over time. This technology has significant implications for the enterprise technology sector, especially in fields related to data security, human-computer interaction, interface computing, and cryptographic protocols, providing a more secure and user-friendly method of authentication.
- Transaction Authentication – Additionally, BCIs could improve security measures in financial transactions by utilising biometric data, such as brainwave patterns, to authenticate user identities, reducing the risk of fraud.
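The template-matching idea behind brainwave authentication can be sketched as follows: a user's enrolled EEG "template" is compared with a fresh recording, and access is granted when the two correlate strongly. Real systems use far richer features than a raw correlation, and the threshold here is an illustrative assumption:

```python
import math

# Hypothetical sketch of EEG-based authentication: compare a fresh
# recording against the user's enrolled template. The 0.8 threshold
# is an arbitrary illustrative choice.

def correlation(a, b):
    """Pearson correlation between two equal-length signals."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    var_a = sum((x - mean_a) ** 2 for x in a)
    var_b = sum((y - mean_b) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b)

def authenticate(template, sample, threshold=0.8):
    return correlation(template, sample) >= threshold
```

The self-learning aspect mentioned above would correspond to updating the stored template as each successful authentication provides new signal data.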
Engineering
- Remote Control Safety – One of the most significant benefits of BCIs in the heavy industry sector is the potential to improve safety and efficiency. By enabling workers to control machinery through thought alone, BCIs can reduce the need for physical interaction with dangerous equipment, thereby enhancing workplace safety.
- Immersive Training – BCIs can also be utilised in training simulations, providing a more immersive and interactive learning experience for workers. This could lead to more effective training programs, ensuring that workers are better prepared for their roles and can perform their tasks with greater precision and safety.
Health
- Brain Modulation – BCIs have shown significant promise in assisting individuals with motor impairments, enabling communication and control through the modulation of brain activity. This is particularly key for individuals who have lost motor functions due to conditions like ALS or stroke. By bypassing traditional muscular-based systems, BCIs offer a means to regain some control over their surroundings, enhancing their ability to perform daily tasks and interact with technology.
- Advanced Prosthetics – BCIs offer transformative potential for the development of advanced prosthetics and medical devices, contributing to the bioelectronic medicine field. For example, BCIs can restore lost functions in patients, such as controlling robotic arms or generating synthetic speech with thoughts alone.
Current Barriers
Economic
- The specialised and expensive nature of BCI systems, coupled with the need for specialised neurosurgery for installation, makes them financially prohibitive for many potential users. High costs could prevent BCIs from reaching those who could benefit most from them, which could limit investment and development.
- A lack of clear metrics to assess BCI system performance and standardised reporting hinders scientific progress in the field. Publication bias towards positive results prevents the research community from learning from failures and errors.
Social
- Questions are emerging around the broader societal implications of the ability to purchase improved cognitive abilities, and the potential for this to create a two-class system and schism in society.
Technological
- The research community’s focus has largely been on command decoding, neglecting other crucial aspects and barriers of BCI design such as feedback, human factors, and learning strategies. Little emphasis has been placed on incorporating user-requirement aspects into BCI design.
- Scalability, portability, electrode stability, and information transfer rates also pose significant challenges, compounded by the need for these technologies to be safe, effective, and reliable over long periods.
- For non-invasive solutions, the dynamic and noisy environments in which end-users operate can drastically alter EEG signals, leading to potential failures of lab-based BCIs in real-life contexts.
- The precision of BCIs, while improving, remains limited by the need for extensive training and the ability to recognise only a predefined set of words or commands. Despite efforts to improve command decoding, correct mental command decoding rates remain relatively low.
- The need for regular adjustments to BCI systems to account for changes in brain activity or user preferences introduces complexity in long-term support. Hardware may require updates or replacements over time, complicating the path to widespread adoption.
Legal
- The novel nature of BCIs means that regulatory frameworks are still evolving. Questions about long-term support, including who is responsible for system adjustments, hardware updates, or replacements, remain unanswered. The intimate nature of BCIs also raises significant privacy and data ownership issues that need to be addressed.
- Identifying suitable candidates for BCI treatment remains challenging. Ethical dilemmas include conducting treatments without informed consent and defining how much information may be retrieved from a patient.
- Many BCIs require invasive procedures to implant electrodes directly onto the brain’s surface. Concerns about the long-term functionality of these implants, coupled with uncertainties about their durability and the need for potential replacements or updates, add complexity to their adoption.
Swarm Robots
Unstoppable Machine Armies
Overview
Definition
Swarm robotics is defined as the study and application of multi-robot systems characterised by a large number of mostly simple physical robots that coordinate through decentralised control and local communication. Unlike traditional robotics, which often relies on a single, complex robot to perform tasks, swarm robotics focuses on achieving collective behaviour that emerges from the interactions among individual robots and between the robots and their environment. This approach is inspired by biological studies of natural swarms, where simple individual rules can generate complex group behaviours.
Key Concepts
Swarm robots operate based on principles observed in nature, where collective behaviour in species like ants, bees, and termites is leveraged to achieve complex tasks without central management or higher-order intelligence. These robots are designed to be simple yet capable of performing tasks such as navigation, search, and exploration in a coordinated manner. Their simplicity makes them economically feasible to construct and flexible in their tasks and roles.
This decentralised approach allows the robots to operate independently, making decisions based on local interactions with their immediate neighbours. Algorithms play a fundamental role in enabling these robots to discern relevant information from distractions in their environment, avoid collisions, and work simultaneously on tasks.
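The decentralised local-interaction loop described above can be sketched in a few lines. The rule below is an alignment-only simplification of classic flocking models; the function names and parameter values are illustrative, not drawn from any specific swarm platform. Each robot updates its heading using only the neighbours within a fixed radius, with no central controller anywhere:

```python
import math

def step(robots, radius=5.0, speed=0.1):
    """One decentralised update: each robot averages the headings of the
    robots within `radius` of it (itself included) and moves one step.
    robots: list of dicts with keys 'x', 'y', 'heading' (radians)."""
    new_state = []
    for r in robots:
        local = [o for o in robots
                 if math.hypot(o['x'] - r['x'], o['y'] - r['y']) < radius]
        # Alignment rule: steer towards the mean heading of the local group.
        heading = math.atan2(sum(math.sin(o['heading']) for o in local),
                             sum(math.cos(o['heading']) for o in local))
        new_state.append({'x': r['x'] + speed * math.cos(heading),
                          'y': r['y'] + speed * math.sin(heading),
                          'heading': heading})
    return new_state

# Two robots start at right angles; repeated local averaging makes a
# common heading emerge without any robot ever seeing the whole swarm.
swarm = [{'x': 0.0, 'y': 0.0, 'heading': 0.0},
         {'x': 1.0, 'y': 0.0, 'heading': math.pi / 2}]
for _ in range(10):
    swarm = step(swarm)
```

Real swarm controllers add separation and cohesion terms plus noise handling, but the structure is the same: every decision is computed from local neighbours only, which is what makes the system scale and tolerate individual failures.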
Historic Development
The concept of swarm robotics is deeply rooted in the study of natural systems. Researchers have long been fascinated by the ability of ants, bees, termites, and other social insects to accomplish complex tasks through collective effort without central control. This observation led to the exploration of how these principles could be applied to robotics, aiming to create robotic systems that mimic the efficiency, flexibility, and robustness of natural swarms.
Initial Concepts and Inspiration: The foundational ideas of swarm robotics were inspired by the observation of social insects and other biological systems. Researchers noted how simple organisms, following simple rules, could achieve complex behaviours and solve problems cooperatively without central control.
One of the earliest examples of a machine exhibiting swarm-like behaviour is the Machina Speculatrix, a series of simple robots built by William Grey Walter in the late 1940s. These robots demonstrated basic principles of swarm behaviour, such as movement based on light intensity and battery level, laying the groundwork for the complex swarm robotics systems developed in the 21st century.
The field has been driven by technological advancements in miniaturisation, sensors, and computing power, allowing for the development of small, autonomous robots capable of cooperating to achieve complex tasks.
The last decade has witnessed significant advancements in swarm robotics. The focus has shifted towards scalability, robustness, and real-world applications. Despite these advancements, swarm robotic applications in industry are still rare, with many projects preferring centralised control systems. However, the interest in decentralised approaches is growing, driven by the potential benefits of scalability, flexibility, and fault tolerance inherent in swarm robotic systems.
Innovation Drivers
Key technologies
- Zigbee: Effective communication and coordination among robots are crucial for the success of swarm robotics. Technologies such as Zigbee have been proposed to enhance inter-robot communication, ensuring proper coordination and routing within swarms. Advanced software and virtual reality interfaces, as demonstrated by Raytheon, enable operators to monitor and manage swarm activities efficiently.
- Simulation: Simulation tools play a vital role in testing and refining the algorithms governing swarm robot behaviour. These tools allow researchers to understand how swarm robots interact and perform tasks in controlled virtual environments, facilitating the development of self-organising systems and distributed computing principles.
- Modular Systems: The development of hardware and modular systems, such as the Kilobots and Intel Aeros, has advanced the field of swarm robotics. These systems are designed for economic feasibility, flexibility in task assignment, and robustness, enabling large groups of robots to perform collective tasks efficiently.
- Emergent Behaviour: Swarm robotics systems are designed to exhibit emergent behaviour, where the collective actions of individual robots result in complex, problem-solving behaviour without explicit programming for the task.
- Foraging: Swarm robots are well-suited for a wide range of applications, especially those categorised under ‘foraging’ tasks, including military reconnaissance, mining, search and rescue, space exploration, construction, and medical applications. The inherent randomness and the ability to self-organise make swarm robots ideal for environments and tasks too complex, dangerous, or unpredictable for humans or traditional robotic systems.
Key organisations
Research & Academic
- Natural Robotics Lab at Sheffield University: has developed robots capable of self-organising and transporting large objects. These robots could also perform tasks within the human body and in dynamic environments like the International Space Station, where re-configurable robots could be invaluable.
- MIT: The development of “M-blocks” by MIT researchers, capable of self-assembling into different shapes, highlights the versatility and potential of swarm robotics.
- Harvard University: the Kilobot is a microrobot developed by researchers at Harvard University. These tiny robots can assemble into different shapes and perform collective tasks.
- Carnegie Mellon University: Claytronics is a future technology concept involving programmable cubes that can self-assemble and reconfigure to form 3D displays or even functional machines showing collective behaviour and modularity.
- Monash University: the Monash Swarm Robotics Laboratory is focused on leveraging swarms to enhance search and rescue operations.
- University of Southern California: the SuperBot project focuses on modular, self-reconfigurable robots for collaborative tasks.
- Swarm Robotics Lab (SRL): at the National Center of Robotics and Automation (NCRA), Taxila, Pakistan, contributes significantly to both theoretical and practical advancements in swarm robotics.
Public Sector
- DARPA: In the United States, defence agencies have shown a strategic interest in swarm robotics for military applications.
Private Sector
- Raytheon: showcased active drone swarm operations as part of DARPA programmes.
- Samsung: Samsung has shown interest in robotics research and development, hinting at potential exploration of swarm robotics for applications like search and rescue or product delivery.
- Geek+ and Unbox: have successfully implemented swarm Autonomous Mobile Robots (AMRs) in logistics and warehouse management.
Next Steps
Future Applications
Digitisation
- Rugged Telecommunications – In telecommunications, swarm robots could be deployed to build and maintain infrastructure, especially in hard-to-reach or dangerous environments. Their ability to autonomously construct complex structures could be utilised for erecting telecommunications towers or laying down cables in challenging terrains, improving connectivity in remote areas without risking human lives.
Security
- Swarm Reconnaissance – In the aerospace and defence sector, swarm robotics can significantly enhance surveillance, reconnaissance, and targeted operations. The distributed nature of swarm systems ensures operations can continue even if a unit fails, thereby enhancing fault tolerance and robustness in critical missions.
Engineering
- Fault Tolerance – In the technology and semiconductor industries, swarm robots can revolutionise manufacturing processes by enhancing automation and efficiency. Their ability to work in parallel and their fault-tolerant nature mean they can efficiently handle tasks such as assembling delicate semiconductor components.
- Warehouse Automation – Swarm robots’ ability to work collaboratively can significantly enhance productivity and flexibility in manufacturing processes, from assembly lines to logistics and inventory management. For instance, the Chinese startup Geek+ has deployed over 15,000 Autonomous Mobile Robots (AMRs) in warehouses, demonstrating the efficiency of swarm robotics in repetitive tasks.
Energy
- Renewables Maintenance – In the energy sector, swarm robotics can contribute to the maintenance and monitoring of vast energy infrastructure networks. Their application in renewable energy production, such as the maintenance of solar farms or wind turbines, can optimise operations and enhance safety in hazardous environments.
Health
- Targeted Therapies – The medical sector stands to benefit significantly from the development of nanorobot swarms for drug delivery and precision treatments. The potential for these swarms to be injected into patients for targeted therapy or to replace certain surgeries altogether by the 2050s indicates a transformative shift in medical procedures and patient care. Swarm robots could perform minimally invasive surgeries with higher precision and flexibility than human surgeons or single robotic systems, revolutionising surgeries and patient care.
Agricultural
- Reducing Pesticides – Projects like the one led by Dr. Kiju Lee at Texas A&M University aim to enhance smart agriculture through the use of unmanned ground and aerial robots for collaborative tasks such as optimising water and fertiliser use and reducing pesticide application.
Climate
- Environmental Cleanup – “Coral bot” swarms are being explored for ocean habitat restoration and for monitoring and mitigating environmental issues like oil spills.
Current Barriers
Economic
- The unique features that make swarm robotics appealing for future applications also present significant barriers to their transition from academic research to scalable industrial solutions. Scalable applications of swarm robotics are still far from being realised, indicating a gap between theoretical research and practical, deployable solutions.
- The need for new business models, especially in the context of a Machine-to-Machine (M2M) economy, represents another significant barrier. For swarm robotics to be commercially viable and widely adopted, innovative business models that cater to the unique aspects of swarm robotics need to be developed.
Social
- The introduction of swarm robotics into various industries could lead to shifts in the labour market, necessitating new skills for designing, building, and deploying robots. While this technology promises to create new job categories, it also raises concerns about job displacement and the need for workforce retraining.
Technological
- The development of swarm robots requires advancements in computer chips, motors, actuators, materials, sensors, and miniaturisation. Although progress has been made, the technology must continue to evolve to meet increasingly complex demands.
- A significant barrier is the complexity of the mathematics involved in enabling swarm robots to operate effectively. Developing algorithms that allow robots to discern real information from distractions in dynamic environments is described as “nontrivial” and requires a deep understanding of both robotics and natural swarm behaviours.
- A significant portion of research in swarm robotics is focused on the analysis of collective behaviours. Understanding how to design desired, or remove undesired, collective behaviour is fundamental to the success of swarm robotics – but is not yet well understood. Additionally, the adaptive emergent behaviour of swarm robotics, while beneficial for flexibility and efficiency, also presents a unique security challenge, as it could be maliciously modified by an intruder.
- Swarm robots employ different types of communication channels, which can vary widely in terms of security vulnerabilities. The diversity in communication methods necessitates a comprehensive approach to securing these channels against potential threats, which can be complex and challenging.
4D Printing
Time Travelling Components
Overview
Definition
4D printing is an advanced form of 3D printing that introduces time as the fourth dimension, allowing 3D printed objects to change shape in response to environmental stimuli such as light, heat, electricity, and magnetic fields. This innovative technology enables the creation of objects that can dynamically alter their form without the need for electromechanical parts, relying instead on the material’s inherent ability to transform in response to specific stimuli.
Key Concepts
At the heart of 4D printing are smart materials, such as shape memory polymers and alloys, which are engineered to respond to specific environmental triggers like temperature, moisture, light, or magnetic fields. These materials can be programmed to undergo transformations, including folding, bending, expanding, or contracting.
The transformative ability of 4D printed objects is driven by their stimuli-responsive behaviour. This behaviour is a result of the inherent properties of the smart materials used in the printing process. For instance, materials developed by researchers at Rutgers University-New Brunswick can vary in stiffness, enabling them to change shape in response to temperature changes.
Designing for 4D printing involves a sophisticated understanding of material properties and their interaction with stimuli. Computational modelling plays an important role in predicting and controlling the transformation of printed objects. This process requires a multidisciplinary approach, combining insights from fields such as computer science, engineering, mathematics, and materials science.
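As a toy illustration of this computational modelling step, the sketch below models a single shape-memory hinge whose fold angle responds to temperature. The sigmoid response curve, the 60 °C transition point, and the function name are illustrative assumptions, not the properties of any real material:

```python
import math

def fold_angle(temp_c, t_transition=60.0, max_angle=90.0, sharpness=0.5):
    """Illustrative stimulus-response model: the hinge's fold angle (degrees)
    rises sigmoidally from ~0 to `max_angle` as temperature crosses the
    transition point of a hypothetical shape-memory polymer."""
    return max_angle / (1.0 + math.exp(-sharpness * (temp_c - t_transition)))

# Flat when cool, fully folded when hot, half-folded at the transition point.
cold, mid, hot = fold_angle(20.0), fold_angle(60.0), fold_angle(100.0)
```

In practice a response surface like this would be fitted from material characterisation data and fed into the CAD model, so that a part printed flat unfolds into the intended 3D shape when the stimulus is applied.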
Historic Development
The evolution of printing technologies has been remarkable, from the inception of 2D printing to the revolutionary development of 3D printing. However, the advent of 4D printing has introduced a new dimension to additive manufacturing, incorporating the element of time into the creation of objects.
The transition from 3D to 4D printing represents a significant advancement in additive manufacturing, driven by improvements in computer-aided design (CAD), additive manufacturing processes, and material science engineering. The development of smart materials, capable of responding to external stimuli, has been pivotal in this evolution, enabling the creation of objects that can change over time.
The concept of 4D printing was first introduced by Skylar Tibbits in collaboration with Stratasys and Autodesk in 2013, marking a significant leap in additive manufacturing. This partnership embarked on a series of experiments aimed at proving the feasibility of 4D printing technology, utilising various materials such as shape memory polymers, water-absorbing materials, and hydrogels.
Recent progress in 4D printing includes advancements in additive manufacturing technologies specifically for 4D printing, stimulation methods to trigger the transformation of printed objects, the development of suitable materials, and the exploration of potential applications. Despite these advancements, challenges in material development remain, indicating that the field is still evolving.
Significant research has focused on optimising 3D printing processes for novel resins, developing new biodegradable resins for biomedical applications, creating materials with hierarchical porosity for various applications, and advancing the field of stretchable conductive materials for healthcare monitoring. In the realm of 4D printing, recent work includes the design of sustainable morphing materials, the preparation of functional materials using polyurethanes for biomedicine and electronics, and the engineering of vascular tissue with dynamic properties.
Innovation Drivers
Key Technologies
- Photocontrolled Reversible–Deactivation Radical Polymerisation Techniques: These techniques impart advanced properties to materials, such as real-time adjustments of surface and bulk properties, self-healing attributes, and precise control over nanostructuration and mechanical properties.
- Direct Ink Writing (DIW) and Fused Filament Fabrication (FFF): Primary technologies employed in 4D printing, enabling the creation of self-adjusting stents, artificial muscles, and drug delivery systems in the healthcare sector.
Key organisations
Research & Academic
- Massachusetts Institute of Technology: Pioneered 4D printing through Skylar Tibbits’ Self-Assembly Lab, which first demonstrated the concept in 2013.
- University of Texas at Austin (UT Austin): Pioneering 4D bioprinting with living cells for tissue engineering.
- Deakin University: A key contributor to academic research on 4D printing.
Private Sector
- Stratasys: Played a key role in the early development of 4D printing through collaboration with MIT on self-assembling materials. Their expertise in 3D printing technologies translates to advancements in 4D printing hardware.
- Autodesk: Contributed to the development of 4D printing software tools for designing and simulating the shape-shifting properties of 4D printed materials. Their software plays a crucial role in optimising designs for 4D printing functionalities.
Next Steps
Future Applications
Digitisation
- Resilient Telecommunications – The telecommunications sector could leverage 4D printing to develop infrastructure and devices that are more responsive to environmental conditions, potentially improving signal transmission and device performance. Antennas or other components that adjust their shape or properties in response to temperature changes could maintain optimal performance without manual intervention.
Security
- Anti-Tamper Security – For cybersecurity, 4D printing could introduce hardware that dynamically changes its configuration to counter physical tampering or enhance security protocols, adding a new layer of security and adaptability to cyber-physical systems.
Engineering
- Self Assembling Components – 4D printing introduces a new level of efficiency and customisation in manufacturing. It allows for the production of objects that can self-assemble or change shape post-production, reducing assembly time and costs. This is particularly useful for creating parts that are too intricate to be assembled by hand or would benefit from being compact during transport but larger in use.
- Self Healing Hardware – 4D printing could lead to more durable and efficient devices. The development of materials such as hydrogels with magnetic nanoparticles and biopolyurethane could impact design and longevity by enabling components that self-assemble or self-repair.
- Adaptive Aerodynamics – In the aerospace and defence sectors, 4D printing offers the ability to create deployable structures that can adapt to different environments, enhancing performance and safety. For instance, aircraft components could change shape in response to atmospheric conditions, improving aerodynamics and fuel efficiency.
- Adaptive Architecture – The development of adaptive infrastructure capable of responding to environmental changes, such as bridges that can adjust to varying loads or temperatures, enhancing safety and longevity.
Energy
- Optimised Renewable Energy – Smart materials could optimise energy capture: for example, solar panels that expose a larger surface area to the sun can increase efficiency, while wind turbine blades that adapt to wind conditions could significantly enhance power generation.
Health
- Personalised Medicine – 4D printing holds remarkable potential in the medical field, particularly in the development of self-adjusting stents, artificial muscles, and drug delivery applications. These innovations could lead to more personalised and effective treatments, with devices that adapt to the patient’s body or release medication in response to specific triggers.
Agricultural
- Adaptive Irrigation – In agriculture, 4D printing can lead to the development of smart farming tools and structures. For example, irrigation systems that can expand or contract in response to soil moisture levels, optimising water usage. Similarly, structures that can change shape to provide optimal light exposure to plants throughout the day could revolutionise greenhouse technology.
Current Barriers
Economic
- The current market for additive manufacturing is still developing, and it may take several decades for 3D technology to fully mature. The implication is that 4D printing, being an even more advanced concept, faces the challenge of establishing a viable economic market.
- Due to this, entering the 4D printing market requires substantial initial investment, not only in technology development but also in market analysis and strategy formulation, since returns are likely to be long term.
Technological
- One of the most significant barriers to the adoption of 4D printing is the limitation and availability of suitable smart materials. These materials must reliably respond to environmental stimuli without degrading over time. Additionally, considerations for the end-of-life cycle of such materials are crucial for ensuring sustainability.
- The development of new materials requires innovative approaches to ensure they possess desired properties like self-healing and precise control over nanostructuration. This includes both the software for designing these materials and the hardware for printing and deploying them.
- The design of sustainable morphing materials requires a deep understanding of material science and engineering. The fields of study involved in 4D printing suggest that a specialised knowledge base and skill set are required to develop, understand, and implement 4D printing technologies.
- Improving production speed and feature resolution without significantly increasing costs is a significant hurdle. Novel printing technologies are still in the developmental phase, requiring further understanding of factors like molecular diffusion length scales and temperature gradients.
Quantum Cryptography
Unbreakable Codes
Overview
Definition
Quantum cryptography is the science that utilises quantum mechanical properties to perform cryptographic tasks, aiming to enable more secure communication methods than those provided by traditional cryptography. This field seeks to develop encryption methods that are impervious to attacks by future algorithms, leveraging the immutable laws of quantum mechanics to encrypt and transmit data securely.
Key Concepts
Quantum cryptography, specifically through Quantum Key Distribution (QKD), operates by utilising a series of photons to transmit a secret, random sequence, which serves as the key for secure communication between two parties. These particles of light are sent over a fiber optic cable, with each photon representing information.
When these photons are transmitted from one location to another, any attempt at eavesdropping or intercepting the key alters the quantum state of the photons. This change can be detected by comparing measurements taken at both ends of the transmission. If discrepancies are found, it indicates that the key has been compromised, and a new key can be generated until a secure transmission is achieved. This method leverages the unique behaviour of quantum mechanics, where observing or measuring a quantum system inevitably changes its state, to provide a theoretically unbreakable form of secure communication.
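The detection mechanism described above can be illustrated with a toy simulation of the BB84 protocol (the Bennett–Brassard scheme discussed under Historic Development below). This is a statistical sketch, not real quantum optics: photons are reduced to bits and bases, and an eavesdropper who measures in a randomly chosen basis corrupts roughly 25% of the sifted key:

```python
import random

def bb84_qber(n_photons, eavesdrop, rng):
    """Toy BB84 run returning the quantum bit error rate (QBER) of the
    sifted key. Without an eavesdropper the QBER is 0; with one, Eve's
    random-basis measurements disturb ~25% of the sifted bits."""
    alice_bits  = [rng.randint(0, 1) for _ in range(n_photons)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_photons)]
    bob_bases   = [rng.randint(0, 1) for _ in range(n_photons)]
    bob_bits = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if eavesdrop:
            eve_basis = rng.randint(0, 1)
            if eve_basis != a_basis:       # wrong basis: outcome is random
                bit = rng.randint(0, 1)
            a_basis = eve_basis            # photon travels on in Eve's basis
        if b_basis != a_basis:             # Bob in the wrong basis: random
            bit = rng.randint(0, 1)
        bob_bits.append(bit)
    # Sifting: keep positions where Alice's and Bob's announced bases match.
    sifted = [(a, b) for a, b, ab, bb
              in zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]
    return sum(a != b for a, b in sifted) / len(sifted)

rng = random.Random(1)
qber_clean  = bb84_qber(4000, eavesdrop=False, rng=rng)  # exactly 0.0
qber_tapped = bb84_qber(4000, eavesdrop=True,  rng=rng)  # ~0.25
```

Comparing a public sample of the sifted key therefore reveals interception: an error rate near 25%, rather than the channel's baseline, tells the two parties to discard the key and start again.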
Historic Development
Cryptography has been the cornerstone of secure communication for thousands of years, evolving significantly with the advent of modern computing. The practice, which dates back to ancient civilisations, has seen a dramatic transformation in the last century with the rise of digital technologies and the internet.
Quantum cryptography, also known as quantum encryption, emerged as a response to the vulnerabilities of classical encryption methods and against the potential capabilities of quantum computing. The foundational principles were laid out in a seminal paper by Bennett and Brassard in 1984, marking the inception of quantum cryptography. This paper introduced a novel approach to secure communication that leverages the principles of quantum mechanics, specifically the distribution of a secret key via single-photon transmissions.
Since its inception, quantum cryptography has seen remarkable advancements in its practical implementation. Notable achievements include the successful exchange of quantum keys over significant distances using optical fibers and the demonstration of daylight free-space QKD over atmospheric ranges.
The future of quantum cryptography looks promising, with researchers proposing innovative ways to integrate it into network operations and develop long-distance quantum teleportation architectures. These advancements hint at the possibility of establishing the world’s first quantum network, revolutionising secure communication. Quantum cryptography stands at the forefront of securing information against the evolving threats in cybersecurity, with ongoing research and commercial efforts aimed at overcoming existing challenges to its broader implementation.
Innovation Drivers
Key technologies
- Quantum Key Distribution (QKD): QKD is the most developed and widely recognised technology in quantum cryptography. It uses the principles of quantum physics to securely distribute cryptographic keys, making any eavesdropping attempt detectable. Ground-based projects and satellite technology are being employed to expand the reach of QKD, aiming for global secure communication.
- Post-Quantum Cryptography (PQC): PQC refers to cryptographic algorithms designed to be secure against both classical and quantum computing attacks. NIST has been focused on vetting and standardising PQC algorithms, highlighting the importance of developing quantum-resistant cryptographic standards.
- Satellite-based Quantum Cryptography: The use of satellites for transmitting quantum keys represents a significant advancement in quantum cryptography. This technology aims to establish secure keys across continents, leveraging the unique properties of quantum mechanics for global secure communication.
Key organisations
Research & Academic
- The University of York is influencing policy decisions alongside its research into fiber-based QKD.
- The Institute for Quantum Computing in Waterloo, Canada, is exploring satellite-based quantum cryptography.
Public Sector
- The National Institute of Standards and Technology (NIST) and the Department of Defense, led by the National Security Agency (NSA), are key U.S. entities focusing on quantum-safe technologies and PQC.
- China’s expansion of its QKD network infrastructure, including the deployment of quantum satellites, underscores its commitment to advancing quantum cryptography.
Private Sector
- ID Quantique: specialises in the secure distribution of keys generated by Quantum Random Number Generators (QRNG).
- Battelle: Battelle has developed a test bed specifically for QKD. This platform serves as a critical infrastructure for experimenting with and refining QKD technologies
- QuSecure is noted for its innovation in quantum resilient cybersecurity.
Next Steps
Future Applications
Security
- Future Proof Cryptography – With the advent of quantum computing, traditional encryption methods are becoming increasingly vulnerable. Quantum cryptography introduces a new paradigm for secure communications that is theoretically immune to many vulnerabilities plaguing traditional cryptographic methods. This is particularly relevant as cyber threats become more sophisticated, requiring more robust defences to protect critical infrastructure and sensitive data.
- IoT Security – For hardware manufacturers, integrating quantum cryptography can enhance the security of devices, particularly those connected to the Internet of Things (IoT). By securing the key exchange process and ensuring the integrity of data transmission, quantum cryptography can prevent unauthorised access and tampering, thereby protecting both the devices and the data they handle.
- Secure Communications – For the telecommunications sector, quantum cryptography offers a method to secure communications over potentially insecure networks. The implementation of QKD can ensure that any communication, whether voice, data, or video, is securely encrypted and safe from eavesdropping or interception. This is particularly important in an era where digital communication is ubiquitous, and the integrity of transmitted information is critical.
Current Barriers
Economic
- The deployment of quantum cryptography requires significant infrastructure investment and faces implementation challenges. Quantum networks, essential for quantum cryptography, require the ability to repeat signals billions of times with billions of photons, necessitating new hardware and software technologies. Additionally, the compatibility of quantum protocols with standard telecom fiber infrastructure and the need for hardware innovations are critical for commercial viability.
- The development and deployment of quantum cryptographic systems entail significant financial investments and resource allocations. The cost of specialised hardware, such as quantum random number generators, and the need for a sophisticated infrastructure can be prohibitive for many organisations. Additionally, the ongoing research and development efforts to address the technological and practical limitations of quantum cryptography require substantial funding and expertise.
- A lack of awareness or understanding among organisations, governments, and the public about the quantum computing threat and the necessity of adopting quantum-safe technologies can hinder the adoption of quantum cryptography. The complexity of quantum mechanics and the nascent state of quantum cryptographic technologies contribute to this challenge.
Technological
- One of the primary barriers to the route to market for quantum cryptography is its inherent technical complexity and novelty. QKD and its more secure variant, Device-Independent QKD (DIQKD), promise theoretically unhackable security solutions. However, the experimental realisation of these technologies indicates a high level of technical complexity and a novelty that complicates their development and implementation. The ongoing certification process by the National Institute of Standards and Technology (NIST) for post-quantum cryptography further exemplifies the technical hurdles in deploying quantum-secure solutions.
- Integrating quantum cryptography into existing security frameworks and systems presents significant hurdles. The fundamentally different nature of quantum computing and cryptography necessitates substantial changes to current cryptographic practices and infrastructure.
- Quantum cryptography, particularly QKD, faces practical limitations in data transmission over long distances. The absorption or disturbance of photons in fibre optic cables restricts the effective range of quantum communication. Moreover, the bandwidth available for quantum communication is currently much lower than that of conventional telecommunications, limiting its practicality for high-speed data transmission needs.
- Despite the theoretical security advantages of quantum cryptography, practical implementations may exhibit vulnerabilities due to device imperfections and the complexity of ensuring security against realistic attacks.
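The distance limitation noted above can be quantified: standard telecom fibre attenuates light by roughly 0.2 dB/km at 1550 nm, so the fraction of photons surviving a link, and with it the raw key rate, falls exponentially with distance. A short illustrative calculation (the 0.2 dB/km figure is a typical assumption, not a measurement of any particular link):

```python
def photon_transmission(distance_km: float, loss_db_per_km: float = 0.2) -> float:
    """Fraction of photons surviving a fibre link of the given length,
    assuming a typical attenuation of ~0.2 dB/km for telecom fibre."""
    return 10 ** (-loss_db_per_km * distance_km / 10)

# Without quantum repeaters, transmission decays exponentially:
for d in (10, 50, 100, 200):
    print(f"{d:>4} km: {photon_transmission(d):.2e} of photons arrive")
```

At 100 km, only about 1% of photons arrive; at 200 km, about 0.01%, which is why quantum repeaters are considered essential for long-haul QKD.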
Neuromorphic Computing
Computers with Human Brains
Overview
Definition
Neuromorphic computing aims to transcend the limitations of traditional computing architectures by emulating the structure and function of the human brain. Using artificial neurons and synapses that communicate with tiny electrical charges, it can significantly reduce power consumption compared to traditional chips.
Key Concepts
Moore’s Law, the observation that the number of transistors on a microchip doubles approximately every two years, is approaching the operational boundaries of silicon-based technologies. This deceleration is attributed to near-molecular-scale components encountering thermal noise, making further scaling unfeasible. The conventional von Neumann architecture, in which memory and processor are separated, restricts data transfer, creating performance bottlenecks that become particularly acute as AI models and datasets grow.
Neuromorphic computing overcomes these hurdles by integrating memory and processing, enabling event-based processing in which components activate only as needed. It relies on artificial neurons and synapses that communicate using analog electrical charges, encoding data in properties of the analog pulses such as their amplitude and timing. This architecture allows for highly adaptable systems capable of handling complex problems while reducing power consumption. Neuromorphic systems can also process information in parallel, significantly increasing computing efficiency and speed, and they increasingly leverage new materials and mechanisms, such as ferroelectric materials.
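The event-based behaviour described here can be illustrated with the simplest spiking-neuron model, the leaky integrate-and-fire neuron. This is a minimal sketch with illustrative constants, not a model of any particular neuromorphic chip:

```python
def lif_neuron(inputs: list[float], threshold: float = 1.0, leak: float = 0.9) -> list[int]:
    """Leaky integrate-and-fire neuron: the membrane potential decays each
    step (the 'leak'), accumulates incoming charge, and emits a spike (1)
    only when it crosses the threshold. Otherwise the neuron stays silent
    (0), which is what makes event-based hardware so power-frugal."""
    v, spikes = 0.0, []
    for current in inputs:
        v = v * leak + current  # decay, then integrate the input charge
        if v >= threshold:
            spikes.append(1)
            v = 0.0             # reset the potential after firing
        else:
            spikes.append(0)
    return spikes

# Sub-threshold inputs accumulate silently; the burst at the end triggers a spike.
print(lif_neuron([0.3, 0.3, 0.3, 0.0, 0.9, 0.9]))  # [0, 0, 0, 0, 1, 0]
```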
Historic Development
The concept of neuromorphic computing was first introduced by Carver Mead in the 1980s, inspired by his research on synaptic transmission in the eye’s retina. Mead’s work laid the foundation for viewing transistors as analog devices, capable of mimicking architectures present in the nervous system.
The Human Brain Project (HBP), which ran from 2013 to 2023, played a crucial role in advancing neuromorphic computing by developing new tools for software development on neuromorphic multicore systems. HBP ran two neuromorphic computing systems that implemented different conceptual approaches to mimicking the functional behaviour of the human brain within the same energy budget.
IBM’s TrueNorth chip, introduced in 2014, and Intel’s Loihi chip, unveiled in 2017, represent major milestones in the field that demonstrate the potential for neuromorphic computing to achieve real-time processing with minimal power consumption. Additionally, the development of systems like SpiNNaker and BrainScaleS has furthered the ability to simulate billions of neurons, pushing the boundaries of brain-inspired computing.
Recent progress in neuromorphic computing has been facilitated by the advent of two-dimensional (2D) materials, enabling the design and implementation of devices critical for the development of neuromorphic systems. These advances include artificial synapses and neurons utilising resistive-switching-based devices and 2D ferroelectric-based memories. The exploration of new materials, such as ferroelectric materials, aims to achieve complex neuron functions with reduced power consumption.
Innovation Drivers
Key Technologies
- Spiking Neural Networks (SNNs): Represent a significant departure from traditional artificial neural networks, mimicking the way neurons in the human brain communicate.
- In-Sensor Computing Vision Chips: Designed based on neuromorphic architectures to minimise unnecessary data transfer, enabling fast and energy-efficient visual cognitive processing.
- Two-Dimensional (2D) Materials: Substances with a thickness of a few nanometres or less. Electrons in these materials move freely in the two-dimensional plane, but their restricted motion in the third dimension is governed by quantum mechanics. Prominent examples include quantum wells and graphene.
- Complementary Metal-Oxide Semiconductor (CMOS): The integration of memristive devices with CMOS chips has been a significant development in neuromorphic computing.
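Spiking neural networks are typically trained with biologically inspired local rules rather than backpropagation. A common example is spike-timing-dependent plasticity (STDP): a synapse strengthens when the presynaptic spike precedes the postsynaptic one, and weakens when the order is reversed. The sketch below uses illustrative constants, not values from any specific chip or paper:

```python
import math

def stdp_weight_change(dt_ms: float, a_plus: float = 0.1,
                       a_minus: float = 0.12, tau_ms: float = 20.0) -> float:
    """Pair-based STDP rule. dt_ms is (post spike time - pre spike time):
    positive dt means the presynaptic spike came first (potentiation),
    negative dt means it came second (depression). The effect decays
    exponentially as the spikes move further apart in time."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)    # strengthen synapse
    return -a_minus * math.exp(dt_ms / tau_ms)       # weaken synapse

print(stdp_weight_change(5.0))   # pre before post: positive weight change
print(stdp_weight_change(-5.0))  # post before pre: negative weight change
```

Because the rule depends only on locally observable spike times, it maps naturally onto memristive synapses, where the weight is stored as a device conductance.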
Key Organisations
Research & Academic
- Tsinghua University: This institution is noted for its contributions to spiking neural networks and in-sensor computing vision chips.
- Fudan University: Focused on the development of optical and optoelectronic neuromorphic devices, leveraging emerging memory technologies.
- Seoul National University: Engaged in the research on synaptic devices and learning algorithms for hardware-based neural networks.
- Polytechnic University of Milan: Affiliated with IBM in the development of neuromorphic computing, focusing on memory device technologies.
- Shanghai Jiao Tong University: Spearheading research in integrated neuromorphic photonics, combining neural networks with photonics.
- University of Massachusetts Amherst: Contributing to overcoming the limitations of AI systems through neuromorphic computing.
Public Sector
- The Human Brain Project: Ran from 2013 to 2023, pioneering a new paradigm in brain research at the interface of computing and technology.
Private Sector
- IBM: Known for the TrueNorth chip, IBM has made significant contributions to neuromorphic computing, including in-memory computing advancements.
- Intel: Intel’s introduction of the Loihi chip and its successor, Loihi 2, marks a significant milestone in neuromorphic computing, optimised for spiking neural network algorithms.
- SpiNNaker (Spiking Neural Network Architecture): Developed at the University of Manchester, UK, SpiNNaker’s system simulates large portions of the human brain in real time, aiding neuroscience research.
- BrainScaleS: Developed at Heidelberg University in Germany, BrainScaleS is recognised for its analog electronic models of neurons and synapses, running simulations 1,000 times faster than real time.
Next Steps
Future Applications
Digitisation
- Efficient IoT – Neuromorphic architecture allows for more efficient data handling and processing, particularly beneficial for IoT applications and drones, where rapid, on-the-spot data processing is crucial. Particularly for drones, the adaptability of neuromorphic chips is a boon where requirements can change rapidly.
- Faster Telecoms – Neuromorphic computing could enhance data processing and transmission capabilities in telecommunications. With the technology’s low power consumption and high efficiency, telecommunications networks could become more reliable and faster.
- Complex AI – The complexity of AI models strains Von Neumann systems, limiting their memory and processing capacities. Neuromorphic computing, by mimicking the brain’s neural networks through hardware and software, seeks to address these limitations.
Security
- Robust Cybersecurity – Neuromorphic computing offers the potential for more robust security solutions. The technology’s ability to process information in a manner similar to the human brain could lead to the development of advanced algorithms and systems capable of identifying and responding to cyber threats more effectively.
Engineering
- Predictive Maintenance – Neuromorphic computing can revolutionise the design and control of complex systems in the engineering sector. It could facilitate the development of intelligent monitoring systems that predict maintenance needs, optimise performance in real-time, and innovate design processes through enhanced simulation capabilities. This could lead to more sustainable and efficient infrastructure and machinery, accelerating innovation in fields such as civil, mechanical, and electrical engineering.
- Faster Material Development – In this sector, neuromorphic computing can accelerate the discovery and development of new materials by enhancing the capabilities of computational chemistry and materials science. It can enable the simulation of complex chemical reactions and material behaviours at a fraction of the time and cost of traditional methods, leading to innovations in everything from pharmaceuticals to nanotechnology.
Energy
- Smart Grids – For the energy sector, neuromorphic computing could enable smarter grid management and integration of renewable energy sources by efficiently analysing patterns and predicting demand. It can also improve the efficiency of energy storage systems and optimise distribution networks, potentially leading to significant cost savings and reduced environmental impact.
Agricultural
- Precision Agriculture – Neuromorphic computing can contribute to precision farming techniques, enabling better analysis of data from soil sensors, weather information, and satellite images to make informed decisions about planting, watering, and harvesting. This can lead to increased crop yields, reduced resource use, and minimised environmental impact.
Current Barriers
Technological
- Neuromorphic hardware remains far from replicating the full capabilities of the human brain. While neuromorphic chips like TrueNorth draw significantly less power than conventional processors, scaling this efficiency remains difficult.
- The scalability and power efficiency of silicon (Si) CMOS transistor-based circuits, which have been extensively investigated for mimicking biological neurons and synapses, are deemed not suitable for replicating large-scale biological neural networks.
- The complexity of spiking neural networks (SNNs), which are central to neuromorphic computing, presents significant hurdles in terms of simulation, performance evaluation, and the development of efficient training methodologies.
- A lack of measurable outcomes or metrics to showcase the efficiency or effectiveness of neuromorphic computing solutions compared to traditional computing paradigms hinders the field’s progress. The establishment of standardised benchmarks is necessary for evaluating and comparing neuromorphic computing technologies objectively.
- Shifting from well-established computing architectures to neuromorphic computing requires significant changes in system design, programming, and utilisation, posing logistical and technical challenges. The novelty of neuromorphic computing architectures also introduces unique security challenges that must be addressed to ensure the safe and reliable deployment of these technologies.
Smart Dust
A Hidden Sensor Network
Overview
Definition
Smart dust comprises tiny sensors, robots, or other devices capable of detecting environmental factors such as light, temperature, vibration, magnetism, or chemicals. These devices operate wirelessly on a computer network and are distributed across wide areas to perform simple tasks, such as gathering data.
Key Concepts
A smart dust network is made up of “motes”. Each mote measures one cubic millimetre or less and is equipped with its own circuitry for integrated sensing, computing, and communication capabilities. These devices communicate wirelessly using radio frequency transceivers and independently gather information to report back to a central hub. A conceptual diagram of a smart dust mote includes several key components: a power system, sensors, an optical transceiver, and an integrated circuit. The power system may comprise a thick-film battery, a solar cell paired with a charge-integrating capacitor for use during periods without light, or a combination of both.
The mote can be equipped with a variety of sensors to measure light, temperature, vibration, magnetic fields, acoustics, and wind shear, depending on its intended application. The ability to 3D print components of these devices as one piece using commercially available 3D printers was a key technological breakthrough.
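The power system described above imposes a hard duty-cycle constraint: a mote must sleep most of the time and spend its harvested energy sparingly on short sense-and-transmit bursts. A back-of-envelope sketch makes this concrete; every figure here is an illustrative assumption, not a specification of any real mote:

```python
def mote_daily_bursts(
    harvest_uw: float = 100.0,   # solar harvest while lit, microwatts (assumed)
    lit_hours: float = 8.0,      # hours of usable light per day (assumed)
    sleep_uw: float = 1.0,       # sleep-mode draw, microwatts (assumed)
    active_uw: float = 5000.0,   # draw during a sense-and-transmit burst (assumed)
    burst_ms: float = 10.0,      # duration of one burst, milliseconds (assumed)
) -> float:
    """Maximum sense-and-transmit bursts per day for a duty-cycled mote:
    harvested energy must first cover the always-on sleep draw, and
    whatever remains is divided among active bursts."""
    harvested_uj = harvest_uw * lit_hours * 3600   # microjoules per day
    sleep_uj = sleep_uw * 24 * 3600                # cost of 24 h of sleep
    burst_uj = active_uw * (burst_ms / 1000)       # cost of one burst
    spare_uj = harvested_uj - sleep_uj
    return max(0.0, spare_uj / burst_uj)

print(int(mote_daily_bursts()))  # bursts/day under the assumed figures
```

Under these assumptions the mote can afford tens of thousands of brief bursts per day, but halving the harvest or doubling the sleep draw cuts that budget sharply, which is why energy management dominates mote design.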
Historic Development
The concept of smart dust traces back to the early 1990s, with significant developments occurring over the subsequent decades. Initially conceptualised during a workshop at RAND in 1992 and further developed through DARPA ISAT studies in the mid-1990s, smart dust was envisioned primarily for military applications, such as monitoring environments in war-torn areas or tracking enemy movements. Kris Pister, a professor at the University of California, Berkeley, humorously coined the term “smart dust” and played a pivotal role in its development.
More recently, a 2022 research paper from the University of Washington presented the idea of tiny, lightweight, programmable, battery-free wireless sensors that can be dispersed by the wind, inspired by dandelion seeds, with a dispersal radius of up to a kilometre.
Innovation Drivers
Key Technologies
- Microelectromechanical Systems (MEMS): The miniaturised devices at the heart of smart dust motes, integrating sensing, computing, power, and communication capabilities.
- Neural Dust: Developed by teams at UC Berkeley, neural dust involves implanting smart dust sensors in rats to monitor and control nerve and muscle activity. These sensors have no batteries and rely on ultrasound for power and measurement.
Key Organisations
Research & Academic
- Zurich University of Applied Sciences (ZHAW): The Institute for Communication and Information Technology (IKT) explores wireless sensor networks.
- University of California, Berkeley: Pioneered Smart Dust research with projects like the Berkeley Smart Dust Mote.
- Keimyung University: Contributing towards a smart dust IoT system.
Next Steps
Future Applications
Digitisation
- Molecular Scale IoT – The development of molecule-scale solar cell infrastructure showcases the potential for smart dust to revolutionise solar energy technology and semiconductor applications. This could extend to the miniaturisation and automation trends in computing, driven by smart dust’s nano-structured silicon sensors and MEMS, enabling more efficient and powerful computing devices.
- Rugged Telecoms – In telecommunications, smart dust could enhance network efficiency and coverage by providing detailed environmental data for optimising network operations. The wireless communication capabilities of MEMS could support the development of new communication protocols and infrastructure, improving connectivity in remote or challenging environments.
Security
- Immutable Surveillance – Smart dust offers extensive surveillance and monitoring capabilities, enhancing both cyber and physical security systems. The ability of MEMS to create large-scale sensor networks and deliver critical data wirelessly is crucial for developing new hardware that can communicate and control smart dust deployments.
Engineering
- Predictive Maintenance – Smart dust can play a pivotal role in infrastructure monitoring within the engineering sector. By deploying these sensors on bridges, roads, and buildings, engineers can gather data on structural integrity, detecting cracks, corrosion, or damages in real-time.
- Workplace Safety – In the chemicals and materials sector, smart dust sensors could monitor environmental conditions during material processing or chemical reactions, ensuring optimal conditions and safety. They could detect the presence of hazardous gases or leaks in real-time, enhancing workplace safety.
Energy
- Smarter Renewables – Smart dust could revolutionise how we monitor and manage energy systems. For instance, it could be used to monitor the health and efficiency of solar panels or wind turbines, detect leaks in pipelines, or optimise the distribution of electricity in smart grids. This would lead to more efficient energy use, reduced waste, and lower operational costs.
Health
- Neural Dust – The medical sector stands to benefit immensely from smart dust technology. Possible applications include diagnostic procedures without the need for invasive surgery and monitoring devices that assist individuals with disabilities. Researchers at UC Berkeley have explored the potential for “neural dust,” an implantable system that could provide feedback on brain functionality. This could revolutionise neurological research and treatment, offering new insights into brain diseases and disorders.
Agricultural
- Smart Agriculture – In agriculture, smart dust technology can monitor soil moisture levels, nutrient content, detect crop diseases, and optimise irrigation and fertilisation practices. This precision agriculture approach boosts crop yields and maximises resource utilisation, contributing to sustainable farming practices and food security. The ability to monitor crops and environmental conditions in real-time allows for more informed decision-making and efficient resource use.
- Smart Food Packaging: Proposed by the Institute of Electrical and Electronics Engineers, future smart dust sensors could be integrated into paper or plastic packaging to detect food freshness and communicate this information via a smartphone app, highlighting applications in consumer goods and food safety.
Climate
- Remote Monitoring: One of the primary limitations that smart dust addresses is the challenge of monitoring vast or inaccessible areas effectively. Traditional monitoring systems often struggle to cover such areas, but smart dust, with its tiny, lightweight sensors that can suspend in the air and disperse over large areas, enables comprehensive environmental data collection even in remote or difficult-to-reach locations.
Current Barriers
Economic
- The high implementation cost of smart dust technology, including the necessary infrastructure, presents a significant financial barrier. This makes the technology inaccessible to many, particularly small and medium-sized enterprises (SMEs), and could slow its adoption until costs decrease. The economic barrier might not just be the cost of the sensors themselves but also the infrastructure required to integrate them into existing systems.
Technological
- Smart dust involves sophisticated technologies such as microelectromechanical systems (MEMS), posing significant barriers to entry and expansion in the market due to the complexity of developing, manufacturing, and integrating these technologies.
- Further challenges include miniaturisation, integration, energy management, functional complexity, communication, sensor integration, and cost. Managing the energy consumption of these motes is crucial, especially given their small size and the limitations on battery size and capacity.
- Once deployed, controlling and managing these dust-sized particles becomes a daunting task. The technology’s minuscule size, while being its greatest strength, paradoxically emerges as a major obstacle to its widespread adoption. Ensuring that the sensors are working correctly, updating their software, and replacing them if necessary poses logistical challenges.
Environmental
- Concerns about the environmental impact of deploying large quantities of smart dust have been raised. The dispersal of tiny electronic sensors could have unforeseen environmental consequences, especially if they are not biodegradable or if they contain harmful substances.
Legal
- A primary barrier to the adoption of smart dust is the significant privacy concerns it raises. The ability of smart dust devices to discreetly monitor and process real-world phenomena could lead to invasive surveillance and data collection without consent. This issue is particularly troubling given the current legislative climate focused on enhancing consumer data protection. The potential for unauthorised access to sensitive information exacerbates these concerns, underscoring the need for robust cybersecurity measures and encryption protocols.
Behind the Report
An Analyst’s Perspective
This report was created by one of our Senior Research Analysts, Vrishti Saxena, using AMPLYFI AI.
To understand and hear more about her experiences in creating this report and about using our AI research tools, please watch below: