Adopting the right methodology is the key to a successful implementation of AI in the Enterprise. From requirements analysis to cognitive architecture, modeling, implementation, verification, and validation, we advise on key steps, proven patterns, and pitfalls in your Enterprise AI adoption journey.
Using an Agile approach, we analyze your current Enterprise Architecture, business processes, tasks, operational decisioning, and knowledge assets to develop insights into AI-based solutions. AI can help your organization achieve operational excellence and unleash blue ocean opportunities.
To help you keep pace with rapid innovation in AI, we continuously monitor and evaluate the latest promising research findings. We build enterprise cognitive capabilities that are human-centered and go beyond pattern recognition to ensure continuous learning as well as compliant, safe, and explainable business decisions. In particular, we integrate existing organizational and domain knowledge with Machine Learning algorithms; distinguish prediction from causal inference; and represent uncertainty in a principled manner. In addition to algorithm validation, we also perform formal verification of AI systems for safety, fairness, and compliance.
How are we different? We're not selling you specialized hardware, software, or cloud services. We are not committed to a single hyped-up approach to AI. Rather, we start with the practical requirements of the task at hand and seek synergies between different state-of-the-art AI methods as required to achieve your business objectives. In addition, our interdisciplinary approach for solving complex problems in AI allows us to draw on diverse bodies of knowledge such as neuroscience, the cognitive sciences, the laws of physics, economics, and complexity science.
About us
What we do.
A passionate team of creative & innovative minds.
We are a passionate multidisciplinary team working toward Artificial General Intelligence (AGI).
Our goal is to use AI strategically to support your organizational goal of generating value and growth through innovation.
By the term 'mind,' I mean ideas and purposes.
First, we perform organizational, business process, task, agent, and system modeling using established standards. The outcome is a Cognitive Architecture which is well aligned with an overarching Enterprise Architecture and ensures continuous enterprise learning and explainable decisions.
Second, we implement AI methods based on methodological rigor, model validation, and transparent reporting. Representing uncertainty in a principled manner and distinguishing prediction from causality are key prerequisites for reliable business operational decisioning using Machine Learning.
Third, we use proven open source AI frameworks and test-driven development, and we integrate AI capabilities into modern software architectural patterns (e.g., Microservices, Event Sourcing, the Actor Model, Reactive patterns, and real-time stream processing). We perform formal verification of the system for safety, privacy, and compliance. For deployment, we advise on building an Enterprise AI infrastructure leveraging the cloud, specialized hardware (TPUs and GPUs), and modern DevOps automation tools and practices.
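As a minimal illustration of the Actor Model applied to serving a model (a toy sketch with hypothetical names such as ModelActor and DummyModel, not our production code), inference requests can be treated as messages delivered to an actor's mailbox and processed one at a time:

```python
# Toy actor-style wrapper around a model inference step using Python's asyncio.
# ModelActor, DummyModel, and score() are hypothetical names for illustration only.
import asyncio

class ModelActor:
    """Processes scoring requests one message at a time from a mailbox."""

    def __init__(self, model):
        self.model = model                  # any object exposing a predict() method
        self.mailbox = asyncio.Queue()      # incoming (features, reply_future) messages

    async def run(self):
        while True:
            features, reply = await self.mailbox.get()
            if features is None:            # poison pill shuts the actor down
                break
            reply.set_result(self.model.predict(features))

    async def score(self, features):
        reply = asyncio.get_running_loop().create_future()
        await self.mailbox.put((features, reply))
        return await reply

class DummyModel:
    def predict(self, features):
        return sum(features)                # stand-in for a real model

async def main():
    actor = ModelActor(DummyModel())
    task = asyncio.create_task(actor.run())
    print(await actor.score([0.2, 0.3, 0.5]))
    await actor.mailbox.put((None, None))   # stop the actor
    await task

asyncio.run(main())
```

In practice, dedicated frameworks (e.g., Akka or Ray) and stream processors add supervision, back-pressure, and distribution; the sketch only conveys the message-passing pattern.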
We conduct intensive R&D activities to identify proven AI best practices and engage in the type of out-of-the-box thinking that leads to innovation and breakthroughs. Indeed, unraveling the secrets of human intelligence is one of the grand challenges of science in the 21st century. It will require an interdisciplinary approach and may hold the key to building intelligent machines that can interact safely and efficiently with humans to solve a wide range of complex problems facing humanity.
We are working toward the long-term goal of Artificial General Intelligence (AGI). In the preface to the Proceedings of the Third Conference on Artificial General Intelligence in 2010, Eric Baum, Marcus Hutter, and Emanuel Kitzelmann wrote: "Artificial General Intelligence (AGI) research focuses on the original and ultimate goal of AI — to create broad human-like and transhuman intelligence, by exploring all available paths, including theoretical and experimental computer science, cognitive science, neuroscience, and innovative interdisciplinary methodologies." Marcus Hutter had previously proposed a foundational mathematical theory of Universal Artificial Intelligence (UAI) in his 2005 book titled Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability, and a universal algorithmic agent called AIXI.
Our goal is responsible and trustworthy AI in safety-critical domains such as transportation and healthcare. To account for the cognitive phenomena that underlie human performance in these domains, we take inspiration from what Edwin Hutchins called "cognition in the wild". In what follows, we will discuss generalization. In particular, we will review how learning abstract concepts allows humans to generalize across domains and tasks; compositionality, or the human ability to compose new arbitrary concepts from existing ones (e.g., the concept of a hybrid creature of human, bird, and lion); how conceptualization and meaning are grounded in embodiment and sensorimotor systems; the human capacity for mental simulation and imagination; and why conceptual metaphorical mapping across domains is pervasive in human thought, language, discourse, and even commonsense reasoning.
Our interdisciplinary approach for solving complex problems in AI allows us to draw on diverse bodies of knowledge which include: biology, psychology, philosophy, cognitive science, neuroscience, mathematics, physics, biochemistry, medicine, statistics, computer science, and aviation. As the physicist Richard Feynman explained: "If our small minds, for some convenience, divide this universe, into parts — physics, biology, geology, astronomy, psychology, and so on — remember that nature does not know it!"
Our founder studied aviation flight operations and worked in the aviation and healthcare industries. We therefore have a bias in favor of building intelligent systems that are safe and reliable and operate in highly regulated environments.
Our methodology is based on the neuroscientist David Marr's three levels of analysis: computational, algorithmic, and implementational. By providing a higher level of abstraction, Marr's framework allows us to avoid premature algorithmic commitments in analyzing and developing AI applications. While Deep Learning is currently the dominant approach to AI in vision, language, speech recognition, and control in game play, we leave open the possibility that better approaches will emerge in the future.
For example, flight cognition — the study of the cognitive and psychological processes that underlie pilot performance, decision making, and human errors — can inform the design of Cognitive Architectures for safe and autonomous agents. These autonomous agents will be very helpful to humans during future airline single-pilot operations, crewed spaceflight missions into deep space, and the exploration of Mars. A Cognitive Architecture implements a computational model of various mechanisms and processes involved in cognition such as: perception, memory, attention, learning, causality, reasoning, decision making, planning, action, motor control, language, emotions, drives (such as food, water, and reproduction), imagination, social interaction, adaptation, self-awareness, metacognition, and consciousness (the "c-word" which we believe should be brought into the realm of scientific inquiry). These Cognitive Architectures will enable the design of autonomous agents that can interact safely and effectively with humans (human-like AI).
Similarly, from biology, neuroscience, and perhaps also physics, we can learn how hummingbirds develop fascinating learning and cognitive abilities (e.g., spatial memory, episodic-like memory, vision, motor control enabling sophisticated flight maneuvers, and vocal learning) with tiny brains. This approach, called Nature-Inspired Computing (NIC), can inform the development of more efficient intelligent machines. We are particularly intrigued by the thermodynamic efficiency of biological computation.
Deep neural networks (DNNs) have recently achieved impressive levels of performance in tasks such as object recognition, speech recognition, language translation, and control in game play. DNNs have proven to be effective at perception and pattern recognition tasks with high dimensional input spaces — a challenge for previous approaches to AI. However, they tend to overfit in low data regimes (most organizations don't have Google-scale data and computing infrastructure) and more work is needed to fully incorporate cognitive mechanisms and processes like memory, attention, commonsense reasoning, and causality.
Returning to our aviation example, we know that good pilots "stay ahead of the airplane". Through rigorous learning, simulation training, and planning, the pilot has acquired a mental model for reasoning about the flight. This mental model includes the aerodynamic, propulsion, and weather models. It allows the pilot to "stay ahead of the airplane" by maintaining situational awareness and by asking herself questions like: "What can happen next?" (prediction), "What if an unplanned situation arises?" (counterfactual causal reasoning), and "What will I do?" (procedural knowledge). For example, thunderstorm activity at the destination airport could force the pilot to divert the plane to an alternate airport or execute a go-around procedure during the approach to landing due to the presence of windshear. In a NASA Report titled Human Performance Contributions to Safety in Commercial Aviation, Cynthia H. Null et al. write: "Experience in the aircraft and the ability to mentally simulate its future state was needed to anticipate a required action, choose an appropriate action, and choose the implementation timeframe for the action."
Situational awareness is especially important during spatial disorientation in flight, when the pilot's perception of the aircraft's attitude and spatial position turns into misperception. The pilot's awareness of her illusory perceptions allows her to rely on flight instruments to ensure flight safety. According to the US Federal Aviation Administration (FAA), "between 5 to 10% of all general aviation accidents can be attributed to spatial disorientation, 90% of which are fatal". A NATO report published in 2008 and titled Spatial Disorientation Training — Demonstration and Avoidance revealed that 25% of military aircraft accidents in the UK between 1983 and 1992, and 33% between 1993 and 2002, could be attributed to spatial disorientation. The air is not the natural habitat in which the human body and mind evolved. The study of spatial disorientation in flight allows us to better disentangle the interactions between our bodily sensations, perception, and action.
In the Allegory of the Cave, the Greek philosopher Plato offered insights into the nature of perception vs. reality. More recently, in his book Reality Is Not What It Seems, theoretical physicist Carlo Rovelli (one of the founders of loop quantum gravity theory) wrote: "It is only in interactions that nature draws the world. ... The world of quantum mechanics is not a world of objects: it is a world of events."
The framework of Active Inference — introduced by the neuroscientist Karl Friston — views veridical and illusory perceptions as Bayesian inferences combining bodily sensations and prior beliefs based on the agent's generative model of the environment. The agent optimizes these inferences and reduces uncertainty through action (minimization of surprise and variational free energy). Vision, vestibular organs, and proprioception play a role in maintaining spatial orientation and also impact human cognition (Embodied and Enactive Cognition). Interoception on the other hand has been found to play a role in self-awareness, abstract concept learning, and emotional feelings like fear. The pilot uses her metacognitive abilities to monitor the accuracy and uncertainty of her perception of the environment and to assess and regulate her own reasoning and decision-making processes.
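As a toy illustration of this Bayesian view of perception (the numbers below are invented for illustration and do not come from any flight data), a prior belief about the aircraft's bank angle can be combined with a noisy sensory cue as a precision-weighted average:

```python
# Toy Gaussian cue combination: posterior = precision-weighted blend of prior and cue.
prior_mean, prior_var = 0.0, 4.0        # prior belief: wings level (degrees, degrees^2)
cue_mean, cue_var = 10.0, 16.0          # noisy sensory cue suggests a 10-degree bank

posterior_var = 1.0 / (1.0 / prior_var + 1.0 / cue_var)
posterior_mean = posterior_var * (prior_mean / prior_var + cue_mean / cue_var)

print(posterior_mean, posterior_var)    # 2.0 degrees, variance 3.2
```

Because the prior is more precise than the cue in this toy example, the posterior stays close to "wings level" — loosely analogous to how strong prior beliefs can contribute to illusory percepts during spatial disorientation.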
John O'Keefe, May-Britt Moser, and Edvard I. Moser won the 2014 Nobel Prize in Physiology or Medicine for discovering the grid cells and place cells that constitute the so-called "inner GPS" in the brain. In a paper titled Navigating in a three-dimensional world, Kathryn J. Jeffery et al. review the role of place cells and grid cells in the hippocampal-entorhinal system and report that "the absence of periodic grid firing in the vertical dimension suggests that grid cell odometry was not operating in the same way as it does on the horizontal plane." In Navigating cognition: Spatial codes for human thinking, Jacob L. S. Bellmund et al. build on research on "Conceptual Spaces" by Peter Gärdenfors to introduce "Cognitive Spaces" whose dimensions of experience are mapped by place cells and grid cells to support general cognitive functions beyond spatial navigation. In Vector-based navigation using grid-like representations in artificial agents, Andrea Banino et al. develop a deep reinforcement learning agent with emergent grid-like representations whose performance "surpassed that of an expert human and comparison agents."
In aircraft equipped with an automated flight control system (fly-by-wire) and a glass cockpit, human-machine interaction must be carefully designed to avoid potentially catastrophic out-of-the-loop performance problems which can result from the loss of situational awareness when the pilot must regain manual control of the aircraft. Out-of-the-loop performance problems resulting from ill-conceived human-machine interaction should not be confused with human errors, hence the important concept of "human-centered design".
The human mind is also a very efficient learner. The FAA requires airline first officers (second in command) to hold an Airline Transport Pilot (ATP) certificate, which requires knowledge and practical tests and 1,500 hours of total flight experience. Up to 100 hours of the required flying experience can be accumulated on a full flight simulator. In contrast, Google's AlphaGo — designed using an approach to AI known as Deep Reinforcement Learning (DRL) — played more than 100 million game simulations. The latest incarnation of AlphaGo called AlphaZero used 5,000 tensor processing units (TPUs) and required significantly fewer game simulations to achieve superhuman performance at the games of chess, shogi, and Go. A previous incarnation called AlphaGo Zero used graphics processing units (GPUs) to train the deep neural networks through self-play with no human knowledge except the rules of the game.
How applicable is AlphaGo's approach to real world decision problems? In Go, the states of the game are fully observable, which enables learning through self-play with Monte-Carlo Tree Search (MCTS). On the other hand, partial observability is typical of real world environments. Also, it is hard to imagine how an AI system can learn tabula rasa — with no human knowledge — through self-play in the domain of aerospace. Such an AI system would have to rediscover the 300-year-old laws of Newton and Euler and the Navier-Stokes equations — the foundations of modern aerodynamics. Isaac Newton himself once famously remarked: "If I have seen further than others, it is by standing upon the shoulders of giants." Therefore, we explore physics-aware machine learning approaches.
Furthermore, the lack of explainability of the policies learned by DRL agents remains an impediment to their use in safety-critical applications like aviation. Nonetheless, DRL (preferably the model-based variant) can be helpful in teaching complex tasks like autonomous aircraft piloting to a robot, although we believe that DRL alone does not account for all the cognitive phenomena that underlie the performance of human pilots (more on that later). According to the International Air Transport Association (IATA), the 2019 fatality risk per million flights was 0.09. Beyond automation with current autopilot systems, the increasing demand for air travel worldwide will create a need for machine autonomy. The Canadian Council for Aviation and Aerospace (CCAA) predicts a shortage of 6,000 pilots in Canada by 2036.
We subscribe to the No Free Lunch Theorem (introduced by David Wolpert and William Macready) and have experience in various state-of-the-art approaches to AI, including symbolic, connectionist, Bayesian, frequentist, and evolutionary methods. In building cognitive systems, we seek synergies between these approaches. For example, Bayesian Deep Learning can help represent uncertainty in deep neural networks in a principled manner — a requirement for domains such as healthcare. Bayesian Decision Theory is also a principled methodology for solving decision making problems under uncertainty. We see Deep Generative Models combining Deep Learning and probabilistic reasoning as a promising avenue for unsupervised and human-like learning including concept learning, one-shot or few-shot generalization, and commonsense reasoning. Reminiscent of human metacognition, Meta Learning (or learning to learn) for Reinforcement Learning and Imitation Learning has generated a lot of interest at the latest NeurIPS conference and holds the promise of learning algorithms that can generate algorithms tailored to specific domains and tasks.
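As one concrete, deliberately toy example of representing uncertainty in a deep network, the following sketch applies Monte Carlo dropout in the spirit of Gal and Ghahramani: the same input is passed through a small network many times with dropout left on, and the spread of the outputs is read as an approximate measure of model uncertainty. The tiny network, its random weights, and the numbers are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 16)), np.zeros(16)   # toy 3-16-1 regression network
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)    # (random, untrained weights)

def forward(x, drop_prob=0.5):
    h = np.maximum(x @ W1 + b1, 0.0)              # ReLU hidden layer
    mask = rng.random(h.shape) > drop_prob        # dropout stays on at prediction time
    h = h * mask / (1.0 - drop_prob)
    return (h @ W2 + b2).item()

x = np.array([0.2, -1.0, 0.5])
samples = np.array([forward(x) for _ in range(200)])   # 200 stochastic forward passes
print(f"prediction {samples.mean():.2f} +/- {samples.std():.2f}")
```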
Lately, there has been a resurgence of evolutionary algorithms proposed as an alternative to established Reinforcement Learning algorithms (like Q-learning and Policy Gradients) or as an efficient mechanism for training deep neural networks (neuroevolution). Evolutionary algorithms are also amenable to embarrassingly parallel computations on commodity hardware.
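The toy sketch below of a (1+λ) evolution strategy (the objective and hyperparameters are arbitrary illustrations) shows why these methods parallelize so naturally: each offspring's fitness evaluation is independent of the others.

```python
import numpy as np

def fitness(x):
    return -np.sum((x - 3.0) ** 2)           # toy objective, maximum at x = 3

rng = np.random.default_rng(42)
parent = rng.normal(size=5)                  # 5-dimensional search space
sigma, n_offspring = 0.5, 20

for generation in range(100):
    offspring = parent + sigma * rng.normal(size=(n_offspring, parent.size))
    scores = np.array([fitness(child) for child in offspring])   # embarrassingly parallel step
    best = offspring[scores.argmax()]
    if fitness(best) > fitness(parent):      # (1+lambda) selection: keep the better of parent and best child
        parent = best

print(parent.round(2))                       # close to [3. 3. 3. 3. 3.]
```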
In addition, rumors of the demise of logic in AI in favor of statistical learning methods (in the era of Machine Learning and Deep Learning hype) have been greatly exaggerated. Since the seminal Dartmouth AI workshop of 1956, decades of research in logic-based methods (e.g., classical, nonmonotonic, probabilistic, description, modal, and temporal logics) have produced useful commonsense reasoning capabilities that are lacking in today's Deep Learning and Reinforcement Learning systems which are essentially based on pattern recognition. This lack of reasoning abilities in AI systems can potentially lead to sample inefficiency or difficulties in providing formal guarantees of system behavior — a concern that is exacerbated by known vulnerabilities such as adversarial attacks against Deep Neural Networks and reward hacking in Reinforcement Learning.
Real world safety-critical systems like aircraft are indeed required by regulation to go through a formal verification process for certification. Consider the following rule in the US federal aviation regulations: "When aircraft are approaching each other head-on, or nearly so, each pilot of each aircraft shall alter course to the right." (14 CFR part 91.113(e)). This rule can be easily specified in a logic-based formalism such as probabilistic temporal logic to account for sensor and perception uncertainty. We can then formally verify that an autonomous robotic pilot complies with the rule. A Deep Reinforcement Learning approach would require trial-and-error using a very large number of flight simulations, although a hybrid approach consisting of generating DRL policies that satisfy probabilistic temporal logic constraints is a possibility. Autonomous agents like robotic pilots must comply with the laws, regulations, and ethical norms of the country in which they operate — a concept related to algorithmic accountability.
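To sketch what such a specification might look like (the predicates, the five-second bound, and the probability threshold below are our own illustrative assumptions, not regulatory values), a PCTL-style property could read:

```latex
% "With probability at least 0.999, it is always the case that a detected
%  head-on geometry is followed by a right turn within 5 seconds."
\mathbf{P}_{\geq 0.999}\left[\ \mathbf{G}\left(\mathit{head\_on} \Rightarrow \mathbf{F}_{\leq 5\,\mathrm{s}}\ \mathit{alter\_course\_right}\right)\ \right]
```

A probabilistic model checker can then verify a property of this form against a model of the autonomous pilot's decision logic and its sensor noise.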
Another issue is that modern machine learning algorithms like Deep Neural Networks (DNNs) and Random Forests are data-hungry. Organizations with low data volumes can jumpstart their adoption of AI by modeling and automating their business processes and operational decisions with logic-based methods. For example, prior to 2009, less than 10% of US hospitals had an Electronic Medical Record (EMR) system. Logic-based Clinical Decision Support (CDS) systems for medical Knowledge Representation and Reasoning (KRR) have been successfully deployed for the automatic execution of Clinical Practice Guidelines (CPGs) and care pathways at the point of care. Description Logic (DL) is the foundation of the Systematized Nomenclature of Medicine (SNOMED) — an ontology that contains more than 300,000 carefully curated medical concepts organized into a class hierarchy and enabling automated reasoning capabilities based on subsumption and attribute relationships between medical concepts.
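As a drastically simplified illustration of subsumption reasoning (the concept names below are made up and bear no relation to actual SNOMED codes), a reasoner can infer that a specific concept is a kind of a more general one by traversing the class hierarchy:

```python
# Toy concept hierarchy and subsumption check (a stand-in for what a
# Description Logic reasoner does over a curated medical ontology).
parents = {
    "viral_pneumonia": {"pneumonia", "viral_infection"},
    "pneumonia": {"lung_disease"},
    "lung_disease": {"disease"},
    "viral_infection": {"infectious_disease"},
    "infectious_disease": {"disease"},
    "disease": set(),
}

def ancestors(concept):
    result = set()
    for parent in parents.get(concept, set()):
        result |= {parent} | ancestors(parent)
    return result

def subsumes(general, specific):
    return general == specific or general in ancestors(specific)

print(subsumes("lung_disease", "viral_pneumonia"))   # True: viral pneumonia is a lung disease
print(subsumes("viral_infection", "pneumonia"))      # False: pneumonia need not be viral
```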
The clinical algorithms in CPGs often require the automated execution of highly accurate and precise calculations (over multiple clinical concept codes and numeric values) which are better performed with a logic-based formalism. An example is a clinical recommendation based on multiple diagnoses or co-morbidities, the patient's age and gender, physiological measurements like vital signs, and laboratory result values. Consider the following rule from the 2013 American College of Cardiology Foundation/American Heart Association (ACCF/AHA) Guideline for the Management of Heart Failure: "Aldosterone receptor antagonists (or mineralocorticoid receptor antagonists) are recommended in patients with NYHA [New York Heart Association] class II-IV HF [Heart Failure] and who have LVEF [left ventricular ejection fraction] of 35% or less, unless contraindicated, to reduce morbidity and mortality. Patients with NYHA class II HF should have a history of prior cardiovascular hospitalization or elevated plasma natriuretic peptide levels to be considered for aldosterone receptor antagonists. Creatinine should be 2.5 mg/dL or less in men or 2.0 mg/dL or less in women (or estimated glomerular filtration rate > 30 mL/min/1.73 ㎡), and potassium should be less than 5.0 mEq/L. Careful monitoring of potassium, renal function, and diuretic dosing should be performed at initiation and closely followed thereafter to minimize risk of hyperkalemia and renal insufficiency". Healthcare payers have established strict quality measures to ensure physicians' concordance with clinical practice guidelines.
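The following toy sketch shows how the quoted recommendation can be expressed as executable logic; the thresholds come from the guideline text above, while the patient-record fields and function name are illustrative assumptions, not a validated clinical decision support implementation:

```python
# Highly simplified encoding of the quoted ACCF/AHA aldosterone antagonist rule.
def aldosterone_antagonist_recommended(p):
    if not (2 <= p["nyha_class"] <= 4 and p["lvef_percent"] <= 35):
        return False
    if p["nyha_class"] == 2 and not (
        p["prior_cv_hospitalization"] or p["elevated_natriuretic_peptide"]
    ):
        return False
    creatinine_ok = (
        p["creatinine_mg_dl"] <= (2.5 if p["sex"] == "male" else 2.0)
        or p["egfr_ml_min_1_73m2"] > 30
    )
    potassium_ok = p["potassium_meq_l"] < 5.0
    return creatinine_ok and potassium_ok and not p["contraindicated"]

patient = {
    "nyha_class": 3, "lvef_percent": 30, "sex": "female",
    "prior_cv_hospitalization": False, "elevated_natriuretic_peptide": True,
    "creatinine_mg_dl": 1.4, "egfr_ml_min_1_73m2": 55,
    "potassium_meq_l": 4.2, "contraindicated": False,
}
print(aldosterone_antagonist_recommended(patient))   # True for this example record
```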
Machine autonomy in the care management of patients runs counter to the principle of shared decision-making in medicine. Legal scholars and lawyers should decide whether existing doctrines of informed consent are still relevant or should be updated. In the meantime, the use of AI should be disclosed to patients in routine care. This can be done as part of the well-established principle of shared decision-making, which considers the values, goals, and preferences of the patient during care planning. Argumentation Theory is a long-established branch of AI that can help reconcile AI recommendations, uncertainty, risks and benefits, patient preferences, clinical practice guidelines, and other scientific evidence. As a guide to rational clinical decision making (by evaluating and communicating the pros and cons of various courses of action), the implementation of Argumentation Theory may also reduce physicians' exposure to liability by generating arguments for potential jurors. This approach empowers both the patient and the clinician to reason, given that modern AI algorithms like Deep Learning are based on pattern recognition and lack logical and causal reasoning abilities. In their paper titled Why do humans reason? Arguments for an argumentative theory, Hugo Mercier and Dan Sperber wrote: "Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade."
Previous efforts like Statistical Relational Learning (SRL) and Neural-Symbolic integration (NeSy) effectively combined logic and statistical learning to achieve sophisticated cognitive reasoning. Progress in AI should build on these previous efforts and we believe that the symbol grounding problem (SGP) can be addressed effectively in hybrid AI architectures comprising both symbolic and sub-symbolic representations through the autonomous agent's embodiment and sensorimotor interaction with the environment. German mathematician, theoretical physicist, and philosopher Hermann Weyl (1885-1955) once remarked that "logic is the hygiene the mathematician practices to keep his ideas healthy and strong." French mathematician Jacques Hadamard (1865-1963) stated that "logic merely sanctions the conquests of the intuition." Given the current shaky theoretical foundations of Deep Learning, it would be wise to heed the wisdom of these giants.
We believe that a unified theory of machine intelligence is needed, and we are approaching that challenge from a complex systems theory perspective. Networks (of arbitrary computational nodes, not just neural networks), emergence, and adaptation are key relevant concepts in the theory of complex systems. In particular, intelligent behavior can emerge from the interaction between diverse learning algorithms. In their drive to survive, predators and prey co-evolve through their interaction in nature. Our current biosphere is the product of 3.8 billion years of evolution and adaptation. The human body contains on average between 30 and 40 trillion cells. The ratio of the number of bacteria in the microbiome to the number of human cells is commonly estimated to be 10 to 1. A paper published in 2016 by Ron Sender, Shai Fuchs, and Ron Milo and titled Revised Estimates for the Number of Human and Bacteria Cells in the Body puts the ratio at 1.3 to 1. The complex interaction networks between hosts (human, animal, or plant) and their microbiomes — including their role in health and disease — are forcing evolutionary biologists to reconsider our notion of the individual or "self". This includes the interaction between the gut microbiome and the brain in humans. In a paper titled A Symbiotic View of Life: We Have Never Been Individuals, Scott F. Gilbert, Jan Sapp, and Alfred I. Tauber wrote: "Thus, animals can no longer be considered individuals in any sense of classical biology: anatomical, developmental, physiological, immunological, genetic, or evolutionary. Our bodies must be understood as holobionts whose anatomical, physiological, immunological, and developmental functions evolved in shared relationships of different species." Culture also plays an important role in both cognition and evolution. The ubiquitous nature of networks (e.g., social and biological networks) will drive the implementation of graph neural networks (GNNs).
An important reason why humans are efficient learners is that we are able to learn concepts and can represent and reason over those concepts efficiently. Pilots take a knowledge test to prove understanding of key concepts and the logical and causal connections between these concepts. Exam topics include: aerodynamics, propulsion, flight control, aircraft systems, the weather, aviation regulations, and human factors (aviation physiology and psychology). This knowledge and a lifetime of accumulated experience came in handy when on January 15, 2009, Captain Chesley "Sully" Sullenberger made the quick decision to ditch the Airbus A320 of US Airways flight 1549 on the Hudson River after the airplane experienced a complete loss of thrust in both engines due to the ingestion of large birds. All 150 passengers and 5 crewmembers survived. Simulation flights were conducted as part of the US National Transportation Safety Board (NTSB) investigation. In the final accident report, the NTSB concluded that "the captain's decision to ditch on the Hudson River rather than attempting to land at an airport provided the highest probability that the accident would be survivable." This event provides a good case study for AI research about decision making under not only uncertainty but also time pressure and high workload and stress levels.
In addition, humans are able to compose new concepts from existing ones — a thought process that Albert Einstein referred to as "combinatorial play". Learning abstract concepts also allows humans to generalize across domains and tasks — a requirement for continuous (life-long) learning in AI systems. For example, concepts learned in the aviation domain — simulation, checklists, Standard Operating Procedures (SOP), Crew Resource Management (CRM), and debriefings — have been successfully applied to medicine. This ability to learn, compose, reason over, generalize, and contextualize abstract concepts is related to language as well. We are particularly intrigued by the pervasive use of argumentation and conceptual metaphors in human thought, language, and discourse. Current Deep Learning architectures fail to represent these abstract concepts which are the basis of human thought, imagination, and ingenuity. Therefore, we explore novel approaches to concept representation, commonsense reasoning, and language understanding. Effective and safe machine autonomy will also require the implementation of important cognitive mechanisms such as intrinsic motivation, attention, episodic and counterfactual thinking, metacognition, and understanding the physics and causal structure of the world (causality).
Human and animal cognition evolved under bounded computational resources. The average power consumption of the human brain is about 20 Watts. We believe that the way forward is energy-efficient AI. Some would argue that as long as the energy consumption is 100% renewable, the current approach of data-hungry and energy-hungry brute force Deep Learning is sustainable. It is an approach to AI that favors large corporations with deep pockets but has not led to major breakthroughs in Artificial General Intelligence (AGI). It has led instead to a troubling brain drain from academia. Despite impressive results on certain tasks, recent transformer architectures like BERT still rely on spurious statistical regularities in humongous data sets.
In a paper titled Energy and Policy Considerations for Deep Learning in NLP, Emma Strubell, Ananya Ganesh, and Andrew McCallum estimated the carbon footprint from training a single Deep Learning model with 213M parameters using a Transformer with neural architecture search at 626,155 pounds of carbon dioxide equivalent compared to 1,984 for a passenger flying round-trip in an airliner from New York to San Francisco. According to an article published in Bloomberg Green in April 2020 and titled Google Data Centers' Secret Cost: Billions of Gallons of Water, the "internet giant taps public water supplies that are already straining from overuse." In contrast, the United States Geological Survey (USGS) Water Science section estimates that each person uses about 80-100 gallons of water per day. Energy consumption and heat dissipation are also important challenges for edge devices like smartphones, virtual reality devices, and drones. We believe that progress toward AGI will accelerate when we accept that cognition (biological or artificial) is fundamentally resource-bounded.
The ability of AI agents to acquire meaning is a complicated subject that goes beyond the Turing test and shouldn't be conflated with virtual assistants like Siri or Alexa executing voice commands. Cognitive scientists have studied and proposed different theories to explain the emergence of meaning. One school of thought suggests that meaning is rooted in the agent's embodiment and sensorimotor interaction with its environment. How does the framework of Active Inference relate to meaning? The answer lies in human imagination, or the capacity for mental simulation. Evidence suggests that the execution of an action and the off-line mental simulation of that action recruit the same neural substrate. Conceptualization and meaning are grounded in our sensorimotor experiences and memories. This also explains why conceptual metaphors are pervasive in human thought, language, discourse, and even commonsense reasoning. Conceptual Metaphor Theory was introduced by George Lakoff and Mark Johnson in their book Metaphors We Live By. A good example of a metaphor is when Michelle Obama famously said: "When they go low, we go high". What is needed are AI agents that can move around and interact with the world and with people the way human infants do. This is a lot harder than throwing humongous datasets at Deep Learning (DL) algorithms and seeing what sticks. Although Deep Learning models like GPT-2 and BERT can be very useful and effective for certain tasks (for example, BERT has been used to improve Google Search, with advertising contributing at least 80% of Alphabet's total revenue in 2019), they have no understanding of the data they process. Therefore, we take an embodied and enactive view of cognition in our research in Cognitive Robotics.
Our expertise includes state-of-the-art AI methods like eXtreme Gradient Boosting, Gaussian Processes, Bayesian Optimization, Probabilistic Programming, Variational Inference, Deep Generative Models, Deep Reinforcement Learning, Causal Inference, Probabilistic Graphical Models (PGMs), Statistical Relational Learning, Computational Logic, Neural-Symbolic integration, and Evolutionary algorithms. We pay special attention to algorithmic transparency, interpretability, and accountability. We use techniques like human-centered design, simulation, and Visual Analytics to help end users understand risk, uncertainty, causality, and evidence.
At the implementational level, we focus on system requirements such as high throughput, low latency, fault tolerance, security, privacy, and compliance. We meet these requirements through a set of software architectural patterns (e.g., task parallelism and the Actor model), adequate testing, and specialized hardware. We also advise on patterns and pitfalls for avoiding Machine Learning technical debt.
We make a distinction between the verification & validation (V&V) of cyber-physical systems with embedded AI algorithms and the auditing of various activities during the system's lifecycle. In regulated industries like aviation, verification is typically part of a certification process using formal methods like probabilistic temporal logics. There is a growing literature on the use of formal methods based on probabilistic verification for providing provable guarantees of the robustness, safety, and fairness of Machine Learning algorithms. Formal methods are different from traditional Machine Learning testing approaches such as cross-validation and the Bootstrap. This approach allows AI systems to fit nicely into existing regulatory frameworks for verification (e.g., DO-333, Formal Methods Supplement to DO-178C for avionics software) and auditing (e.g., FAA Stage of Involvement audits) as opposed to creating new AI regulations.
Obviously, the need for good project leadership, management, and governance applies to AI projects as well.
The Cognitive Enterprise Transformation® methodology
Intelligence is the ability to avoid doing work, yet getting the work done.
- Linus Torvalds