How to Make Image-Labeling Projects Successful

Much ink and industry interest have focused on the role AI will play in radiology, the release of major AI datasets, and the development of novel algorithms. Comparatively little has been said about a key part of the process: the annotation of those datasets. Any data scientist will tell you that the quality of an algorithm depends critically on the quality of the data used for training.

Data quality comprises two parts: the source images and the attributes assigned to those images by humans, directly or indirectly. The latter includes both textual report information and visual annotations applied as a mask layer on top of the original images. Because annotations are rarely applied reliably during the initial interpretation, they are typically added later by volunteer or contracted radiologists (the latter at no small expense).

Because of the enormous number of images requiring annotation and the expense involved, it is in the interest of any algorithm development team to weigh the various approaches to image labeling carefully and to choose a software tool that maximizes the efficiency of the work. In this post, we discuss key considerations when planning your image-labeling process and share a checklist to help you select the right software.

Plan your attack

Before embarking on an image-labeling project, it is important to pin down several key decisions in advance. By defining and documenting the annotation task as objectively and descriptively as possible, and by piloting your project on a small set of images, you can determine your needs from day one. Following are a few of the most important considerations.

Web-based vs. local image annotation

Where images will be stored, rendered, and annotated is a big factor in deciding on the best tool for labeling. Some labeling tools require the radiologist to download images to a local computer to perform the annotations. For the annotator, this model allows working offline, without an internet connection. For the researcher, this approach requires only a cloud storage service from which the annotator downloads images.

By contrast, a newer generation of cloud-based annotation services performs all of the computing on the server side and allows annotation through a web client. For the researcher, this model might require more work to develop a custom user interface, but it can also reduce the challenges posed by annotators who lack tech savvy or perform the work in a non-standard fashion (e.g., using a different color scheme that later needs to be re-mapped). In addition, software updates can be pushed to users more easily than with locally installed programs. On the downside, server or bandwidth lag can be a source of frustration when working with large images (high-resolution radiography and mammography in particular). Even small amounts of lag can have a major cumulative effect over the course of annotating thousands of images.
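
To get a feel for how quickly small delays compound, a back-of-the-envelope calculation helps. The numbers below are illustrative assumptions rather than measurements:

```python
# Rough estimate of cumulative time lost to lag over an annotation project.
# Both figures are assumptions for illustration, not benchmarks.
lag_per_study_s = 3      # assumed average server/bandwidth lag per study, in seconds
num_studies = 5_000      # assumed number of studies to annotate

total_hours = lag_per_study_s * num_studies / 3600
print(f"Cumulative time lost to lag: {total_hours:.1f} hours")  # about 4.2 hours
```

Even a modest three seconds per study adds up to roughly half a working day of wasted annotator time at this scale.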

Proprietary vs. open-source and standards-based

Another important consideration is the distribution philosophy of the software provider. A company might choose to develop its own software for in-house annotation, make it accessible only on its local network, and use proprietary labels. This is a good paradigm for a well-funded company that wishes to maintain total control of its annotations, perhaps monetizing the annotations, the images, or the software itself through licensing. However, external users are at the mercy of the developer's internal logic for workflow, protocols, and labeling.

On the other hand, a company or individual researcher might choose open-source software to create its annotations. Those annotations could then be shared with everyone who uses that software. Alternatively, the annotations could be mapped to an existing standard, such as the Annotation and Image Markup (AIM) project, and then imported into any software supporting that language. This approach has the advantage of a shared, nonproprietary lexicon and flexible workflows, though support might not be readily available.

AI-LAB™, developed and supported by the ACR, provides radiologists with the ability to generate standards-based annotations. AI-LAB can also be used as a dataset management tool, integrating with a variety of annotation tools to manage and store collections of annotations in a standards-based way.
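
As a rough illustration of what a standards-based, shareable annotation might look like in practice, the sketch below serializes a single annotation into a self-describing record. The field names, UIDs, and coded label are hypothetical placeholders chosen for this example; they are not the actual AIM schema or the AI-LAB format:

```python
import json
import uuid
from datetime import datetime, timezone

# Hypothetical, simplified annotation record. All field names, UIDs, and codes
# below are illustrative placeholders, NOT the real AIM schema or AI-LAB format.
annotation = {
    "annotation_uid": str(uuid.uuid4()),               # unique ID for this annotation
    "image_reference": {                               # which image the annotation belongs to
        "study_uid": "1.2.840.99999.1",                # placeholder DICOM-style UIDs
        "series_uid": "1.2.840.99999.1.2",
        "sop_instance_uid": "1.2.840.99999.1.2.42",
    },
    "label": {
        "lexicon": "ExampleLexicon",                   # a shared, nonproprietary lexicon
        "code": "EX-0001",                             # placeholder code, not a real RadLex/SNOMED code
        "meaning": "pulmonary nodule",
    },
    "geometry": {
        "type": "polygon",                             # outline drawn by the annotator
        "points_px": [[102, 240], [118, 236], [125, 251], [109, 258]],
    },
    "annotator": "radiologist_01",
    "created_utc": datetime.now(timezone.utc).isoformat(),
}

print(json.dumps(annotation, indent=2))
```

Because every field is explicit and mapped to a shared lexicon, a record like this can be exported from one tool and imported into another without relying on any vendor's internal conventions.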

Traditional input devices vs. drawing tools

Although most clinical image interpretation is done with a mouse and most reporting with a microphone or keyboard, these devices are not ideal for annotation tasks such as drawing fine curves or pixel-level masks. We find that drawing pads or pen-enabled screens and tablets provide the most comfortable and efficient annotation experience.

What influences tool selection?

While each tool has its own quirks, the process required for annotation work determines the best tool for a project. Here is a checklist to help you determine the right fit:

• Are you annotating a single structure, multiple non-overlapping structures, or complex overlapping structures? When breaking down complex, overlapping structures, the methodology and the order of segmentation steps become increasingly important and should influence your selection.

• Will annotation be done on a desktop computer with a 24” monitor and a mouse, or on a laptop or tablet with pen/drawing capability? Select software that will function correctly with the hardware your annotators will use.

• How much bandwidth is available? This can be an issue for web-based platforms with larger studies. Even a few seconds of server lag can grind annotation to a halt when images and/or software functions are not on a local machine or on a system designed to minimize lag.

• For segmentation of lesions and organs in 3-D datasets, does the software carry an annotation forward from one slice to the next, making the next slice easier to annotate? Does it go a step further and allow you to skip a few slices and interpolate the intervening ones? (A minimal sketch of this kind of interpolation follows the checklist.)

• How will the drawing method change the workflow? There are tradeoffs in speed and accuracy between drawing a line around an object and placing interpolation markers around it. Decide in advance which is more critical.

• How will you evaluate the algorithm output? How will machine output be edited for iterative algorithm learning? How might the system you choose limit editing ability? For example, some systems with interpolation markers won't let you add more markers to what the computer has placed, making it difficult to redistribute markers around a complex structure.

• How much dexterity and control over fine annotations does quality require? The physical device used for annotation matters; the right tool provides enough precision for detailed segmentation tasks.
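
The interpolation question above is easier to evaluate with a concrete picture of how such a feature can work. The sketch below is one common shape-based heuristic, not a description of any particular product: contours drawn on two annotated slices are converted to signed distance maps, and the skipped slices in between are estimated by linearly blending those maps. It assumes NumPy and SciPy are available.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask: np.ndarray) -> np.ndarray:
    """Signed distance map: negative inside the mask, positive outside."""
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(~mask)
    return outside - inside

def interpolate_slices(mask_a: np.ndarray, mask_b: np.ndarray, n_between: int) -> list:
    """Estimate the masks on slices lying between two annotated slices by
    linearly blending their signed distance maps (a simple shape-based
    interpolation heuristic; real tools may use more sophisticated methods)."""
    sd_a, sd_b = signed_distance(mask_a), signed_distance(mask_b)
    estimated = []
    for i in range(1, n_between + 1):
        t = i / (n_between + 1)              # fractional position between the annotated slices
        blended = (1 - t) * sd_a + t * sd_b  # blend the two distance maps
        estimated.append(blended < 0)        # inside wherever the blended distance is negative
    return estimated

# Tiny synthetic example: a small disk annotated on slice A, a larger disk on slice B.
yy, xx = np.mgrid[0:64, 0:64]
mask_a = (yy - 32) ** 2 + (xx - 32) ** 2 < 8 ** 2
mask_b = (yy - 32) ** 2 + (xx - 32) ** 2 < 16 ** 2
middle = interpolate_slices(mask_a, mask_b, n_between=3)[1]   # the estimated middle slice
print(middle.sum(), "pixels in the estimated middle-slice mask")
```

Whether a tool does something like this automatically, and whether it lets you correct the interpolated result afterward, directly affects annotation speed on volumetric studies.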

Concluding thoughts

By planning your project thoughtfully and asking the right questions from the beginning, you lay the groundwork for a successful image-labeling effort. Every project is different. We hope that sharing our experiences gives you a head start on the key decisions you'll need to make before beginning an initiative to annotate datasets for algorithm development.



Prasanth Prasanna, MD | Chief of Imaging Informatics, University of Utah Health Science Center, Salt Lake City, UT and Arjun Sharma, MD | Attending Radiologist, Suburban Radiologists, Oak Brook, IL
