Publications
For an updated list, please visit Google Scholar. * denotes equal contribution.
2024
- [AISY] Motion Enhanced Multi-Level Tracker (MEMTrack): A Deep Learning-Based Approach to Microrobot Tracking in Dense and Low-Contrast Environments
  Medha Sawhney, Bhas Karmarkar, Eric J. Leaman, and 3 more authors
  Advanced Intelligent Systems, 2024
Tracking microrobots is challenging due to their minute size and high speed. In biomedical applications, this challenge is exacerbated by the dense surrounding environments with feature sizes and shapes comparable to microrobots. Herein, Motion Enhanced Multi-level Tracker (MEMTrack) is introduced for detecting and tracking microrobots in dense and low-contrast environments. Informed by the physics of microrobot motion, synthetic motion features for deep learning-based object detection and a modified Simple Online and Real-time Tracking (SORT) algorithm with interpolation are used for tracking. MEMTrack is trained and tested using bacterial micromotors in collagen (tissue phantom), achieving precision and recall of 76% and 51%, respectively. Compared to the state-of-the-art baseline models, MEMTrack provides a minimum of 2.6-fold higher precision with a reasonably high recall. MEMTrack’s generalizability to unseen (aqueous) media and its versatility in tracking microrobots of different shapes, sizes, and motion characteristics are shown. Finally, it is shown that MEMTrack localizes objects with a root-mean-square error of less than 1.84 μm and quantifies the average speed of all tested systems with no statistically significant difference from the laboriously produced manual tracking data. MEMTrack significantly advances microrobot localization and tracking in dense and low-contrast settings and can impact fundamental and translational microrobotic research.
@article{msawhney2024memtrack,
  author = {Sawhney, Medha and Karmarkar, Bhas and Leaman, Eric J. and Daw, Arka and Karpatne, Anuj and Behkam, Bahareh},
  title = {Motion Enhanced Multi-Level Tracker (MEMTrack): A Deep Learning-Based Approach to Microrobot Tracking in Dense and Low-Contrast Environments},
  journal = {Advanced Intelligent Systems},
  pages = {2300590},
  year = {2024},
  keywords = {bacteria, biohybrid microrobotics, collagen, computer vision, machine learning, multiobject tracking, object detection},
  doi = {10.1002/aisy.202300590},
  url = {https://onlinelibrary.wiley.com/doi/abs/10.1002/aisy.202300590},
  eprint = {https://onlinelibrary.wiley.com/doi/pdf/10.1002/aisy.202300590},
  category = {Journal Publications},
  dimension = {true},
  publisher = {Wiley Online Library},
}
- [NeurIPS 2024] VLM4Bio: A Benchmark Dataset to Evaluate Pretrained Vision-Language Models for Trait Discovery from Biological Images
  M. Maruf, Arka Daw, Kazi Sajeed Mehrab, and 19 more authors
  In The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2024
Images are increasingly becoming the currency for documenting biodiversity on the planet, providing novel opportunities for accelerating scientific discoveries in the field of organismal biology, especially with the advent of large vision-language models (VLMs). We ask if pre-trained VLMs can aid scientists in answering a range of biologically relevant questions without any additional fine-tuning. In this paper, we evaluate the effectiveness of 12 state-of-the-art (SOTA) VLMs in the field of organismal biology using a novel dataset, VLM4Bio, consisting of 469K question-answer pairs involving 30K images from three groups of organisms: fishes, birds, and butterflies, covering five biologically relevant tasks. We also explore the effects of applying prompting techniques and tests for reasoning hallucination on the performance of VLMs, shedding new light on the capabilities of current SOTA VLMs in answering biologically relevant questions using images.
2023
- [CVPR] Detecting and Tracking Hard-to-Detect Bacteria in Dense Porous Backgrounds
  Medha Sawhney*, Bhas Karmarkar*, Eric Leaman, and 3 more authors
  2023. Oral + Poster Presentation in the CV4Animals Workshop at CVPR 2023
Studying bacteria motility is crucial to understanding and controlling biomedical and ecological phenomena involving bacteria. Tracking bacteria in complex environments such as polysaccharide (agar) or protein (collagen) hydrogels is a challenging task due to the lack of visually distinguishable features between bacteria and the surrounding environment, making state-of-the-art methods for tracking easily recognizable objects such as pedestrians and cars unsuitable for this application. We propose a novel pipeline for detecting and tracking bacteria in bright-field microscopy videos involving bacteria in complex backgrounds. Our pipeline uses motion-based features and combines multiple models for detecting bacteria of varying difficulty levels. We apply multiple filters to prune false positive detections, and then use the SORT tracking algorithm with interpolation in case of missing detections. Our results demonstrate that our pipeline can accurately track hard-to-detect bacteria, achieving high precision and recall.
- [arXiv] MEMTRACK: A Deep Learning-Based Approach to Microrobot Tracking in Dense and Low-Contrast Environments
  Medha Sawhney, Bhas Karmarkar, Eric J. Leaman, and 3 more authors
  arXiv, 2023
Tracking microrobots is challenging, considering their minute size and high speed. As the field progresses towards developing microrobots for biomedical applications and conducting mechanistic studies in physiologically relevant media (e.g., collagen), this challenge is exacerbated by the dense surrounding environments with feature size and shape comparable to microrobots. Herein, we report Motion Enhanced Multi-level Tracker (MEMTrack), a robust pipeline for detecting and tracking microrobots using synthetic motion features, deep learning-based object detection, and a modified Simple Online and Real-time Tracking (SORT) algorithm with interpolation for tracking. Our object detection approach combines different models based on the object’s motion pattern. We trained and validated our model using bacterial micro-motors in collagen (tissue phantom) and tested it in collagen and aqueous media. We demonstrate that MEMTrack accurately tracks even the most challenging bacteria missed by skilled human annotators, achieving precision and recall of 77% and 48% in collagen and 94% and 35% in liquid media, respectively. Moreover, we show that MEMTrack can quantify average bacteria speed with no statistically significant difference from the laboriously produced manual tracking data. MEMTrack represents a significant contribution to microrobot localization and tracking, and opens the potential for vision-based deep learning approaches to microrobot control in dense and low-contrast settings. All source code for training and testing MEMTrack and reproducing the results of the paper has been made publicly available at this https URL.
@article{sawhney2023memtrack,
  title = {MEMTRACK: A Deep Learning-Based Approach to Microrobot Tracking in Dense and Low-Contrast Environments},
  author = {Sawhney, Medha and Karmarkar, Bhas and Leaman, Eric J. and Daw, Arka and Karpatne, Anuj and Behkam, Bahareh},
  year = {2023},
  eprint = {2310.09441},
  archiveprefix = {arXiv},
  primaryclass = {cs.CV},
  category = {Preprints},
  dimension = {true},
  journal = {arXiv},
  url = {https://arxiv.org/abs/2310.09441},
}
2022
- [bioRxiv] Deep Learning Enabled Label-free Cell Force Computation in Deformable Fibrous Environments
  Abinash Padhi, Arka Daw, Medha Sawhney, and 5 more authors
  bioRxiv, 2022
Through force exertion, cells actively engage with their immediate fibrous extracellular matrix (ECM) environment, causing dynamic remodeling of the environment and influencing cellular shape and contractility changes in a feedforward loop. Controlling cell shapes and quantifying the force-driven dynamic reciprocal interactions in a label-free setting is vital to understand cell behavior in fibrous environments but currently unavailable. Here, we introduce a force measurement platform termed crosshatch nanonet force microscopy (cNFM) that reveals new insights into cell shape-force coupling. Using a suspended crosshatch network of fibers capable of recovering in vivo cell shapes, we utilize deep learning methods to circumvent the fiduciary fluorescent markers required to recognize fiber intersections. Our method provides high fidelity computer reconstruction of different fiber architectures by automatically translating phase-contrast time-lapse images into synthetic fluorescent images. An inverse problem based on the nonlinear mechanics of fiber networks is formulated to match the network deformation and deformed fiber shapes to estimate the forces. We reveal an order-of-magnitude force changes associated with cell shape changes during migration, forces during cell-cell interactions and force changes as single mesenchymal stem cells undergo differentiation. Overall, deep learning methods are employed in detecting and tracking highly compliant backgrounds to develop an automatic and label-free force measurement platform to describe cell shape-force coupling in fibrous environments that cells would likely interact with in vivo.
@article{Padhi2022.10.24.513423,
  author = {Padhi, Abinash and Daw, Arka and Sawhney, Medha and Talukder, Maahi M. and Agashe, Atharva and Kale, Sohan and Karpatne, Anuj and Nain, Amrinder S.},
  title = {Deep Learning Enabled Label-free Cell Force Computation in Deformable Fibrous Environments},
  elocation-id = {2022.10.24.513423},
  year = {2022},
  doi = {10.1101/2022.10.24.513423},
  publisher = {Cold Spring Harbor Laboratory},
  url = {https://www.biorxiv.org/content/early/2022/10/25/2022.10.24.513423},
  eprint = {https://www.biorxiv.org/content/early/2022/10/25/2022.10.24.513423.full.pdf},
  journal = {bioRxiv},
  category = {Preprints},
  biorxiv = {10.1101/2022.10.24.513423v1},
  dimension = {true},
}