PyTorch image classification available

Today, a new library for performing image classification has made its debut:


The library is based on the PyTorch example code for ImageNet. For ResNet-based networks, you can fine-tune pretrained models on your own data rather than just training on the ImageNet dataset. In addition, you can make predictions (single and batch/continuous), output information on built models, and export trained models to TorchScript.

The library is also available via Docker images, one for GPU-based machines and one for CPU-only ones. However, the latter should only be used for inference, not training, as it is simply too slow.

More information on the library and the Docker images is available from GitHub:

wai.annotations release 0.5.4

A new release of wai.annotations is out now: 0.5.4

The introduction of image segmentation required more refactoring behind the scenes, which resulted in the 0.5.x release series.

Highlights since the 0.4.0 release:

  • MS COCO format can now specify the labels to expect (in case subsets of the dataset do not have all labels present)

  • MS COCO format can now sort the determined labels to avoid ordering issues

  • MS COCO can write the discovered labels to a text file (comma-separated list)

  • Readers/writers no longer assume disk access

  • Macro support for simple command-line substitution

  • New image classification formats: subdir (used by TensorFlow image classification) and ADAMS (label of image present in report)

  • MS COCO/ROI/VGG object detection formats no longer write negative annotations

  • With the strip-annotations plugin, all annotations can be stripped during the conversion (e.g., for generating a dataset consisting only of images)

  • Image segmentation support: PNG with indexed palette for labels, PNG using the blue channel for labels, and a layer-segments format that stores each label in a separate PNG (making it easier to create subsets of labels)
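The blue-channel variant can be illustrated in a few lines of Pillow/NumPy (a generic sketch, not the wai.annotations reader itself): each pixel's blue value is interpreted directly as its label index.

```python
import numpy as np
from PIL import Image

def blue_channel_labels(img: Image.Image) -> np.ndarray:
    """Interpret the blue channel of an RGB image as per-pixel label indices."""
    return np.asarray(img.convert("RGB"))[:, :, 2]

# Tiny synthetic example: a 2x1 image whose blue values encode labels 0 and 3.
img = Image.new("RGB", (2, 1))
img.putpixel((0, 0), (10, 20, 0))  # blue=0 -> label 0
img.putpixel((1, 0), (10, 20, 3))  # blue=3 -> label 3
labels = blue_channel_labels(img)
```

One design consequence of this encoding: since the label lives in a single 8-bit channel, it supports at most 256 distinct labels per image.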

Paper accepted

Our publication, A comparison of machine learning methods for cross-domain few-shot learning, has been accepted at the 33rd Australasian Joint Conference on Artificial Intelligence, Canberra, Australia.

For more details and downloads, see our publications page.

Keras image segmentation Docker image available

A new Docker image is available for training Keras image segmentation models using a GPU backend. The image is based on TensorFlow 1.14 and Divam Gupta's code, plus additional tools for converting indexed PNGs into RGB ones and continuously processing images with a model.
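The indexed-to-RGB conversion mentioned above boils down to applying the PNG's embedded palette; a minimal sketch with Pillow (assuming the palette is intact, and not the Docker image's actual tool) looks like this:

```python
from PIL import Image

def indexed_to_rgb(path_in: str, path_out: str) -> None:
    """Convert an indexed (palette, mode 'P') PNG to a plain RGB PNG."""
    Image.open(path_in).convert("RGB").save(path_out)

# Demonstration with an in-memory palette image instead of files:
pal_img = Image.new("P", (1, 1))
# Palette: index 0 = black, index 1 = red, rest zero-filled (768 values total).
pal_img.putpalette([0, 0, 0, 255, 0, 0] + [0] * (256 * 3 - 6))
pal_img.putpixel((0, 0), 1)  # pixel stores palette index 1
rgb_img = pal_img.convert("RGB")  # index 1 resolves to (255, 0, 0)
```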

More information on the Docker image is available from GitHub:

wai.lazypip release

An initial release of wai.lazypip is out now: 0.0.1

wai.lazypip is a little helper library that can install additional packages within a virtual environment on demand if required modules, functions, attributes or classes are not present. Under the hood, pip is used for installing the additional packages.
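The underlying idea (check whether a module is importable, and pip-install its package if not) can be sketched as follows. This is a generic illustration of the technique, not wai.lazypip's actual API; the `require` helper and its parameters are hypothetical.

```python
import importlib
import importlib.util
import subprocess
import sys
from typing import Optional

def require(module_name: str, package: Optional[str] = None):
    """Import module_name, pip-installing `package` (or the module name
    itself) into the current environment first if it is missing."""
    if importlib.util.find_spec(module_name) is None:
        # Use the running interpreter's pip so the install lands in the
        # same (virtual) environment.
        subprocess.check_call(
            [sys.executable, "-m", "pip", "install", package or module_name]
        )
    return importlib.import_module(module_name)
```

For example, `mod = require("json")` simply imports the stdlib module without installing anything, while a missing third-party module would trigger a pip install first.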

ADAMS snapshots now publicly available

The newly available ufdl-frontend-adams modules for ADAMS now have public builds available for download:

As of now, the following workflows can be used for managing a UFDL server instance and datasets:

  • adams-ufdl-core-manage_backend.flow - manages users, teams, projects, licenses

  • adams-ufdl-image-manage_image_classification_datasets.flow - for image classification datasets

  • adams-ufdl-image-manage_object_detection_datasets.flow - for object detection datasets

  • adams-ufdl-speech-manage_speech_datasets.flow - for speech datasets

NB: In order to use these flows, you need a running instance of the ufdl-backend.

GitHub repositories now publicly available

The (very much work-in-progress) code of the following UFDL repositories is now publicly available: