DeepCore Cogifier

DeepCore has a new capability! While the DeepCore team has been primarily focused on its machine learning and computer vision mission, we’re also interested in tools that directly support that mission. Overhead imagery tends to be large, which makes it expensive to send across the wire for inference jobs. Fortunately, there are image formats that make imagery much easier to work with in a cloud context.

Enter Cloud Optimized GeoTIFFs (COGs). A COG is a regular GeoTIFF organized with internal tiling and overview pyramids so it can be streamed efficiently over HTTP: a caller can request just the parts of the image it needs, rather than the entire file. This limits time across the wire, making transmission both faster and more targeted.
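To make the pyramid idea concrete, here is a small illustrative sketch (not DeepCore or GDAL code) of how overview levels are typically chosen: each level halves the previous one until the whole image fits within a single tile. The 256-pixel tile size and the example scene dimensions are assumptions for illustration.

```python
def overview_levels(width, height, tile_size=256):
    """Return the downsampling factors for a COG-style overview pyramid.

    Each overview halves the previous level, stopping once the reduced
    image fits within a single tile. Illustrative only; real COG writers
    may choose levels differently.
    """
    factors = []
    factor = 2
    while max(width, height) // factor > tile_size:
        factors.append(factor)
        factor *= 2
    return factors

# e.g. a hypothetical 40,000 x 30,000 pixel scene
print(overview_levels(40000, 30000))  # → [2, 4, 8, 16, 32, 64, 128]
```

Because each level is a quarter the size of the one before it, the full pyramid adds only about a third more data to the file, while letting a client fetch a coarse preview with a handful of small range requests.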

Internal pyramids are what separate COGs from ordinary GeoTIFFs

DeepCore Cogifier takes advantage of our processing pipeline, allowing source-file reads, pyramid generation, and output-file writes to run concurrently. This concurrent processing lets us generate a COG from an input source file much faster than traditional sequential processing.
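The overlap of read, process, and write stages can be sketched as a simple threaded pipeline with bounded queues. This is a hypothetical stand-in for DeepCore's pipeline, not its actual implementation; `transform` here stands in for pyramid generation, and the list append stands in for the output-file write.

```python
import queue
import threading

def run_pipeline(tiles, transform):
    """Sketch of a read -> process -> write pipeline.

    Three worker threads connected by bounded queues, so reading the
    next tile, processing the current one, and writing the previous
    one can overlap instead of running one after another.
    """
    read_q = queue.Queue(maxsize=4)
    write_q = queue.Queue(maxsize=4)
    results = []

    def reader():
        for tile in tiles:           # stands in for source-file reads
            read_q.put(tile)
        read_q.put(None)             # sentinel: no more tiles

    def processor():
        while (tile := read_q.get()) is not None:
            write_q.put(transform(tile))  # e.g. pyramid generation
        write_q.put(None)

    def writer():
        while (tile := write_q.get()) is not None:
            results.append(tile)     # stands in for output-file writes

    threads = [threading.Thread(target=t) for t in (reader, processor, writer)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

print(run_pipeline(range(5), lambda t: t * t))  # → [0, 1, 4, 9, 16]
```

With a single thread per stage, tile order is preserved automatically; the bounded queues keep memory use flat even when one stage is slower than the others.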

GDAL now has support for creating COGs in its current master branch, slated for release in version 3.1. Since GDAL is the de facto standard open-source imagery library, we pitted our new DeepCore Cogifier against it to see how we stack up.
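For reference, GDAL 3.1's COG driver is invoked through the standard `gdal_translate` tool; a minimal example follows, with the input and output filenames as placeholders.

```shell
# Create a Cloud Optimized GeoTIFF with GDAL >= 3.1 (the COG driver).
# input.tif / output_cog.tif are placeholder filenames.
gdal_translate input.tif output_cog.tif -of COG -co COMPRESS=DEFLATE
```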

On our test system, we compared GDAL and DeepCore Cogifier creating a COG from a single 52.1 GB GeoTIFF from the DigitalGlobe archives. GDAL took nearly 34 minutes to create a COG copy of the input file, while DeepCore Cogifier took less than 5 minutes to create a new COG. DeepCore’s concurrent processing shaves nearly 85% off the time required to create a COG!

Just get what you need!

Stay tuned for other new tools and features coming up in future DeepCore releases! Please let us know if you would like more information…

 
