Artificial intelligence (AI) has permeated our lives. Our phones unlock at the sight of our faces. We can have entire text conversations with ChatGPT. Amazon knows what I am looking for, and my email finishes my sentences with uncanny accuracy.
AI may seem magical, but these solutions are based on deep learning and neural networks (NNs), which only require a little calculus—and lots of data and computing power.
The first NNs, proposed in the 1940s and '50s, aimed to emulate human brains by perceiving stimuli (inputs), processing them with interconnected layers of “artificial neurons,” and producing responses (outputs). For example, facial recognition on phones is trained to accept an input image and answer, “Is this person my owner?” If yes, it unlocks.
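To make the picture concrete, here is a minimal sketch, assuming nothing about any particular model, of one stimulus flowing through two small layers of artificial neurons; the layer sizes and random weights are purely illustrative:

```python
# A minimal sketch: one input flowing through a tiny two-layer network.
# Each "artificial neuron" weights its inputs, sums them and squashes the result.
import numpy as np

def layer(x, weights, biases):
    # Each neuron sums its weighted inputs, then applies a simple nonlinearity.
    return np.tanh(weights @ x + biases)

rng = np.random.default_rng(0)
x = rng.random(4)                                   # a stand-in "stimulus" (e.g., image features)
h = layer(x, rng.normal(size=(3, 4)), np.zeros(3))  # hidden layer: 4 inputs -> 3 neurons
y = layer(h, rng.normal(size=(1, 3)), np.zeros(1))  # output layer: 3 neurons -> 1 answer
print("output:", y)                                 # e.g., a score for "is this the owner?"
```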
Inside an NN, each connection between neurons has a “knob” controlling how strongly a signal is passed from one to the next. “Training” an NN involves tweaking these knobs until the NN consistently maps a large training dataset of inputs to their desired outputs. This tweaking of millions or billions of knobs is guided by calculus to minimize errors in the outputs. Effective NNs not only learn to produce the desired training outputs but also generalize to new inputs they encounter.
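As a hedged illustration of the calculus at work, the toy script below turns the knobs of a tiny network by gradient descent until it maps the classic XOR inputs to their desired outputs. Every size, seed and learning rate here is an assumption chosen for the demo, not anyone's production setup:

```python
# Toy "training": adjust the knobs (weights) with calculus (gradient descent)
# until inputs map to desired outputs. Illustrative problem: XOR.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # training inputs
T = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # knobs, layer 1 (2 inputs -> 4 neurons)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # knobs, layer 2 (4 neurons -> 1 output)
lr = 1.0                                       # how far to turn the knobs each step

for step in range(5000):
    # Forward pass: inputs -> hidden layer -> output.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)

    # Backward pass: the chain rule tells us which way each knob reduces error.
    dY = (Y - T) * Y * (1 - Y)       # error signal at the output
    dH = (dY @ W2.T) * H * (1 - H)   # error signal at the hidden layer

    # Turn every knob slightly downhill on the error.
    W2 -= lr * H.T @ dY;  b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH;  b1 -= lr * dH.sum(axis=0)

print(Y.round(2).ravel())  # should approach the desired [0, 1, 1, 0]
```

Scaled up to millions or billions of knobs and run on specialized hardware, this same loop is essentially what “training” means for the NNs described above.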
At Florida Tech’s NEural TransmissionS (NETS) Lab, we study deep learning and develop our own NNs. Concerningly, NNs make mistakes for reasons that are often hard to pin down, which makes high-stakes deployments risky. Much of our work focuses on these failure modes: assessing why they occur and what we can do about them.
Led by Ph.D. student Mackenzie Meni, we developed a technique called PEEK that “peeks” into the inner workings of NNs to visualize which details they focus on. PEEK explains NN decisions and reveals biases in training data. Excitingly, PEEK can often recover the correct output from those inner workings even when the NN itself fails to produce it. Ongoing work aims to use these “corrected” outputs as a fail-safe to catch and fix errors on the fly.
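PEEK’s internals aren’t described here, so the sketch below swaps in the simplest technique in the same family, occlusion sensitivity: blank out pieces of the input and see how much the output moves. The names `saliency_by_occlusion` and `toy_model` are hypothetical, chosen for this demo, and do not come from PEEK:

```python
# Occlusion sensitivity: cover parts of the input and watch how the network's
# output changes. Regions whose removal changes the output most are the
# details the network is "focusing on." `model` is a placeholder for any
# trained classifier returning a single score.
import numpy as np

def saliency_by_occlusion(model, image, patch=4):
    base = model(image)                        # score on the untouched image
    heat = np.zeros_like(image, dtype=float)
    for r in range(0, image.shape[0], patch):
        for c in range(0, image.shape[1], patch):
            masked = image.copy()
            masked[r:r+patch, c:c+patch] = 0   # blank out one patch
            heat[r:r+patch, c:c+patch] = abs(base - model(masked))
    return heat                                # high values = important regions

# Usage with a toy "model" that only looks at the image's top-left corner:
toy_model = lambda img: img[:8, :8].mean()
heatmap = saliency_by_occlusion(toy_model, np.random.default_rng(2).random((16, 16)))
print(heatmap.round(2))  # nonzero only where the toy model actually "looks"
```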
The versatility of NNs allows us to collaborate across disciplines. We work regularly with aerospace and biomedical engineers.
With Ph.D. student Trupti Mahendrakar ’21 M.S. of the Autonomy Lab, we developed vision and guidance algorithms for autonomous satellite swarms for the Air Force Research Laboratory (AFRL), with ongoing work on human-guided vision algorithms.
Ph.D. student Nehru Attzs ’16, ’19 M.S., is developing an algorithm to track satellite components in real time.
Ph.D. student Arianna Issitt ’23 and I are currently summer faculty/graduate fellows at the AFRL, working on a project to send chaser satellites on inspection orbits around spacecraft, capturing images to build 3D reconstructions. We are designing optimal inspection orbits and deploying them on spaceflight computers.
Additionally, we collaborate with the Multiscale Cardiovascular Fluids Laboratory to develop NNs that noninvasively estimate blood flow dynamics within patients’ blood vessels in real time. This can enable medical teams to make rapid diagnoses and treatment plans for patients with cardiovascular disease.
The efforts of the NETS Lab aim to provide a deeper understanding of AI broadly and to design effective solutions for safety-critical applications in spaceflight and medicine.