The Facebook owner has also released a dataset, the Segment Anything 1-Billion mask dataset (SA-1B), built for training general-purpose object segmentation models from open-world images.


Meta announces an AI model for image segmentation called Segment Anything Model. (Credit: VisbyStar/Wikimedia Commons)

Meta has announced an artificial intelligence (AI) model for image segmentation called Segment Anything Model (SAM) to identify which image pixels belong to an object.
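To make "which image pixels belong to an object" concrete, a segmentation mask can be represented as a boolean array the same size as the image, marking object pixels as true. The toy example below illustrates only this mask representation with a simple threshold; it is not how SAM itself segments images, which uses a trained neural network:

```python
import numpy as np

# A toy 6x8 grayscale image containing one bright rectangular "object".
image = np.zeros((6, 8))
image[2:5, 3:7] = 1.0

# A segmentation mask: True where a pixel belongs to the object.
# (Thresholding stands in here for a real model's prediction.)
mask = image > 0.5

object_pixels = np.argwhere(mask)  # (row, col) of every object pixel
print(mask.sum())                  # number of pixels assigned to the object -> 12
```

A model like SAM outputs one such mask per object, so an image with many objects yields many overlapping boolean arrays.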


According to Meta, SA-1B is the largest ever segmentation dataset.

Segment Anything Model is expected to create masks for every object in any image or video, including objects and image types the AI model did not encounter during training.

Meta stated that the image segmentation AI model could be leveraged to drive applications in various domains that require identifying and segmenting any object in any image.

In the augmented reality (AR) and virtual reality (VR) domain, the new AI image segmentation model is anticipated to enable selecting an object based on a user’s gaze and then bringing it into 3D.

The model was trained on a dataset of nearly 11 million licensed images and more than one billion segmentation masks, and is said to show robust zero-shot performance on a variety of segmentation tasks.

SAM will produce a valid segmentation mask for any prompt, including foreground or background points, freeform text, a rough box or mask, or any other data indicating the item to be segmented, Meta said.
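The idea of a prompt selecting a mask can be sketched with plain numpy. The hypothetical helpers below pick among pre-computed candidate masks, using a clicked point or a rough box as the prompt; SAM itself predicts masks with a neural network rather than looking them up, so this is only a conceptual illustration of what the prompt types mean:

```python
import numpy as np

def select_mask_by_point(masks, point):
    """Return the first candidate mask containing the clicked point.
    Hypothetical helper illustrating a foreground-point prompt."""
    row, col = point
    for mask in masks:
        if mask[row, col]:
            return mask
    return None

def select_mask_by_box(masks, box):
    """Return the candidate mask with the highest overlap (IoU)
    with a rough bounding-box prompt (r0, c0, r1, c1)."""
    r0, c0, r1, c1 = box
    box_mask = np.zeros_like(masks[0], dtype=bool)
    box_mask[r0:r1, c0:c1] = True
    ious = [(m & box_mask).sum() / (m | box_mask).sum() for m in masks]
    return masks[int(np.argmax(ious))]

# Two candidate masks on a 5x5 grid: one top-left, one bottom-right.
a = np.zeros((5, 5), dtype=bool); a[0:2, 0:2] = True
b = np.zeros((5, 5), dtype=bool); b[3:5, 3:5] = True

chosen = select_mask_by_point([a, b], point=(4, 4))  # click bottom-right
print(chosen is b)  # -> True
```

A box prompt works the same way in this sketch: `select_mask_by_box([a, b], (3, 3, 5, 5))` also picks the bottom-right mask, since it overlaps that box most.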

Meta stated: “Our final dataset includes more than 1.1 billion segmentation masks collected on about 11 million licensed and privacy-preserving images.

“SA-1B has 400x more masks than any existing segmentation dataset, and as verified by human evaluation studies, the masks are of high quality and diversity, and in some cases even comparable in quality to masks from the previous much smaller, fully manually annotated datasets.”