March 17, 2025

POCUSim – Everything Old is New Again

Elias Jaffa, MD, MS, FACEP, FPD-AEMUS
Assistant Professor, Department of Emergency Medicine, Yale School of Medicine.
Disclosure: iMerit – recent consulting work for AI company building products related to ultrasound


In the context of education, it can be helpful to think of point-of-care ultrasound (POCUS) as the sum of three distinct skill domains: (1) image acquisition, (2) image interpretation, and (3) clinical integration. As with any complex skill, improvement often requires isolating and deliberately focusing on one or more sub-skills at a time. Image acquisition has received substantial attention over the years, and rightly so, given that it is often the most challenging skill and thus the primary source of the “user dependence” so often maligned by those in other fields. The typical approach focuses on building reps, initially under close supervision with normal models (often co-learners), followed by a dedicated ultrasound rotation. These rotations also represent our usual approach to teaching image interpretation, both in real time while scanning patients and through image review/QA sessions.

Clinical integration, however, can be a significant challenge to practice or test outside of actual patient care. Incorporating POCUS into high-fidelity case simulations (HFCSs) seems a natural choice, but anyone who has attempted this is likely all too aware of the challenges it poses. Cases require abnormal images, so live-scanning a volunteer is generally out of the question. Several systems that can generate semi-realistic ultrasound images in real time have come to market (e.g., Vimedix, SonoSim LiveScan), but these systems are often prohibitively expensive and have yet to produce anything better than a cartoonish facsimile of real ultrasound images. Without other options, many of us have resigned ourselves to simply letting learners request certain ultrasound views and responding by displaying the requested view, at best by showing a video clip on a screen or at worst by handing learners a printed still image.

Several years ago, Kulyk and Olszynski created a workable alternative called edus2 [1] in which a set of RFID tags could be placed on a sim body (representing standard locations of ultrasound views, such as RUQ/LUQ/etc). These tags could then be read by an inexpensive RFID reader, triggering a specific video clip to be played on a screen as though the learners were acquiring the images themselves in real time, thus affording an opportunity to practice (or test) both image interpretation and clinical integration outside of a real patient interaction. Unfortunately, despite the authors’ laudable efforts to make the system widely available by publishing the source code on GitHub [2] and build instructions on their website [3], constructing the system required a fair amount of technical knowledge. Damjanovic et al [4] subsequently updated the system to an HTML-based version that runs in a web browser, simplifying setup and expanding its reach to other educators and learners. While this source code is also available online, it remains relatively challenging to set up and somewhat unwieldy to use, lacking a truly modern user interface (UI) and still requiring clips to be carefully renamed or otherwise hard-coded into the program itself.
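The core dispatch idea behind these simulators can be sketched in a few lines of browser JavaScript. This is an illustrative sketch, not the actual edus2 or POCUSim source: it assumes the RFID reader behaves like a USB keyboard, “typing” the tag ID followed by Enter, so the page only needs to buffer keystrokes and look up the matching clip. All names here (`tagToClip`, `resolveClip`, the tag IDs) are hypothetical.

```javascript
// Illustrative sketch of an RFID-tag-to-clip dispatcher.
// Assumption: the reader emits the tag ID as keystrokes, then Enter.
const tagToClip = {
  TAG001: "ruq-positive-fast.mp4",
  TAG002: "luq-normal.mp4",
};

// Pure lookup: return the clip for a scanned tag, or null if unmapped.
function resolveClip(tagId, mapping) {
  return mapping[tagId] ?? null;
}

// Browser-only wiring: buffer keystrokes until Enter, then play the clip.
if (typeof document !== "undefined") {
  let buffer = "";
  document.addEventListener("keydown", (e) => {
    if (e.key === "Enter") {
      const clip = resolveClip(buffer, tagToClip);
      if (clip) {
        const video = document.querySelector("video");
        video.src = clip;
        video.play();
      }
      buffer = ""; // reset for the next scan
    } else if (e.key.length === 1) {
      buffer += e.key; // printable character from the reader
    }
  });
}
```

Treating the reader as a keyboard is what keeps the hardware cheap: any HID-compatible RFID reader works without drivers or custom firmware.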

Of course, these days, everything old is new again; thus, POCUSim was born! I have tried to make this latest iteration a truly useful and usable educational tool by removing as many technical hurdles as possible and updating the UI to something I hope will be intuitive for users of any skill level. The system is currently hosted on a personal server and is accessible at https://pocusim.app. While a more robust and feature-rich version may become available in the future, I am committed to keeping the current version freely available to anyone wishing to use it for the foreseeable future. Simply navigate to the URL, click “Configure the case”, plug in your own clips and tag IDs, and you’re good to go.

Aside from an intuitive interface and ease of use, another of my primary goals was to leave as much control as possible in the hands of the user. It is therefore worth noting that no data provided by users ever leaves their browser: any clip “uploaded” to the system is simply stored in the local storage provided by their browser (using a system called IndexedDB [5]) and can be easily wiped at any time.
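For readers curious how browser-local storage of video clips works, here is a minimal sketch using the standard IndexedDB API. The database and store names are hypothetical (this is not the POCUSim source), but the pattern is the same: the clip Blob is written to a local object store and never touches a network connection.

```javascript
// Sketch: keeping an "uploaded" clip entirely inside the browser
// via IndexedDB. Database/store names here are hypothetical.

// Record shape used for storage; pure and easy to test.
function makeClipRecord(id, blob) {
  return { id, blob, savedAt: Date.now() };
}

// Browser-only: open a local database and store a clip Blob in it.
if (typeof indexedDB !== "undefined") {
  const open = indexedDB.open("pocusim-demo", 1);
  open.onupgradeneeded = () => {
    // Runs once, on first open: create the object store.
    open.result.createObjectStore("clips", { keyPath: "id" });
  };
  open.onsuccess = () => {
    const db = open.result;
    const tx = db.transaction("clips", "readwrite");
    // The Blob is written to local disk by the browser; no upload occurs.
    tx.objectStore("clips").put(makeClipRecord("ruq", new Blob()));
    tx.oncomplete = () => db.close();
  };
  // Wiping the data is equally local:
  // indexedDB.deleteDatabase("pocusim-demo");
}
```

Because IndexedDB lives in the browser’s own storage area, clearing site data (or calling `indexedDB.deleteDatabase`) removes everything, which is what makes the “easily wiped at any time” guarantee possible.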

For a simple demonstration of how to use the system, please refer to this video: POCUSim Demo. If anyone experiences errors or bugs, or simply has suggestions for improvements, my inbox is always open.

References

  1. Olszynski PA, Harris T, Renihan P, D’Eon M, Premkumar K. Ultrasound during critical care simulation: a randomized crossover study. CJEM. 2015;18(3):183-90.
  2. Kulyk P. edus2 - The Emergency Department Ultrasound Simulator [Internet]. 2011 [updated 7 Feb 2013; cited 31 Dec 2024]. Available from: https://github.com/asclepius/edus2.
  3. Kulyk P. edus2 - Emergency Department Ultrasound Simulator [Internet]. 2011 [cited 31 Dec 2024]. Available from: https://www.edus2.com/.
  4. Damjanovic D, Goebel U, Fischer B, Huth M, Breger H, Buerkle H, et al. An easy-to-build, low-budget point-of-care ultrasound simulator: from Linux to a web-based solution. Crit Ultrasound J. 2017;9(1):4.
  5. Mozilla. IndexedDB API [Internet]. [cited 31 Dec 2024]. Available from: https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API.