For each original image, we apply a suite of imaging algorithms and mathematical analyses to reveal features that correlate with the conditions or tissue types of interest for a given Use Case (the imaging problem to be solved). Each original diagnostic image is transformed as follows.
- New Processed Image(s) – PetView combines mathematical and imaging algorithms that use pixel-level patterns from the original to generate a new image in which specific patterns become visible to the human eye. For example, the human eye can discern only about 19% of the shades of gray present in a typical human mammogram; the processed image makes more of this information, hidden in plain sight until now, accessible. With more tissue features revealed, signature patterns can be correlated not only with different tissue types but also with specific conditions.
- Machine Learning Decision Support – While pixel-level data in original images are mostly amorphous, the feature data generated to create the New Processed Images are far more structured. The quality of any machine learning model is only as good as the data used to build and run it. PetView uses this structured data to build machine learning models from signature features that correlate strongly with specific tissue types and conditions. The resulting models can then predict tissue types or conditions on future images with high accuracy.