CAGS
npfl138.datasets.cags.CAGS
Source code in npfl138/datasets/cags.py
LABEL_NAMES
class-attribute
instance-attribute
LABEL_NAMES: list[str] = [
"Abyssinian",
"Bengal",
"Bombay",
"British_Shorthair",
"Egyptian_Mau",
"Maine_Coon",
"Russian_Blue",
"Siamese",
"Sphynx",
"american_bulldog",
"american_pit_bull_terrier",
"basset_hound",
"beagle",
"boxer",
"chihuahua",
"english_cocker_spaniel",
"english_setter",
"german_shorthaired",
"great_pyrenees",
"havanese",
"japanese_chin",
"keeshond",
"leonberger",
"miniature_pinscher",
"newfoundland",
"pomeranian",
"pug",
"saint_bernard",
"samoyed",
"scottish_terrier",
"shiba_inu",
"staffordshire_bull_terrier",
"wheaten_terrier",
"yorkshire_terrier",
]
The list of label names in the dataset.
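A predicted class index can be turned into a human-readable name simply by indexing into this list. A minimal sketch in plain Python (the list is abbreviated here; the real attribute has 34 entries in the order shown above):

```python
# Abbreviated copy of CAGS.LABEL_NAMES for illustration only --
# the real list has 34 entries in the order shown above.
LABEL_NAMES = ["Abyssinian", "Bengal", "Bombay", "British_Shorthair"]

def label_name(index: int) -> str:
    """Return the human-readable name for a predicted class index."""
    return LABEL_NAMES[index]

print(label_name(1))  # → Bengal
```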
Element
class-attribute
instance-attribute
The type of a single dataset element.
Dataset
Bases: TFRecordDataset
Source code in npfl138/datasets/cags.py
__len__
__len__() -> int
Return the number of elements in the dataset.
Source code in npfl138/datasets/cags.py
__init__
__init__(decode_on_demand: bool = False) -> None
Load the CAGS dataset, downloading it if necessary.
Source code in npfl138/datasets/cags.py
MaskIoUMetric
Bases: MaskIoU
The MaskIoUMetric computes the intersection-over-union (IoU) metric for evaluating the segmentation task.
Source code in npfl138/datasets/cags.py
evaluate_classification
staticmethod
Evaluate the predicted labels against the gold dataset.
Returns:
- accuracy (float) – The average accuracy of the predicted labels, in percent.
Source code in npfl138/datasets/cags.py
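The reported accuracy can be sketched in plain Python: it is the fraction of predicted labels matching the gold labels, times 100. This is an illustrative reimplementation, not the library's code:

```python
def classification_accuracy(predictions: list[int], gold: list[int]) -> float:
    """Average accuracy of predicted labels, in percent."""
    assert len(predictions) == len(gold), "prediction/gold length mismatch"
    correct = sum(p == g for p, g in zip(predictions, gold))
    return 100.0 * correct / len(gold)

print(classification_accuracy([0, 1, 2, 2], [0, 1, 1, 2]))  # → 75.0
```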
evaluate_classification_file
staticmethod
Evaluate the file with label predictions against the gold dataset.
Returns:
- accuracy (float) – The average accuracy of the predicted labels, in percent.
Source code in npfl138/datasets/cags.py
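A round-trip of such a predictions file might look as follows. Note this is a hypothetical sketch assuming one predicted value per line; the exact file format is not documented on this page, so consult the course materials:

```python
import io

# Hypothetical predictions file: one predicted value per line (an
# assumption -- the format is not specified on this page).
predictions = [3, 17, 0]
buffer = io.StringIO("\n".join(str(p) for p in predictions) + "\n")

# Parse the file back into a list of integer predictions.
parsed = [int(line) for line in buffer if line.strip()]
print(parsed)  # → [3, 17, 0]
```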
evaluate_segmentation
staticmethod
Evaluate the predicted masks against the gold dataset.
Returns:
- iou (float) – The average IoU of the predicted masks, in percent.
Source code in npfl138/datasets/cags.py
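The IoU averaged by the segmentation evaluation can be illustrated on flattened binary masks in plain Python. This is an illustrative sketch, not the library's MaskIoU implementation:

```python
def mask_iou(pred: list[int], gold: list[int]) -> float:
    """IoU of two flattened binary masks, in percent."""
    intersection = sum(p and g for p, g in zip(pred, gold))
    union = sum(p or g for p, g in zip(pred, gold))
    # By convention, two empty masks count as a perfect match.
    return 100.0 * intersection / union if union else 100.0

# One shared pixel, three pixels in the union: 100 * 1 / 3 ≈ 33.3
print(mask_iou([1, 1, 0, 0], [1, 0, 1, 0]))
```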
evaluate_segmentation_file
staticmethod
Evaluate the file with mask predictions against the gold dataset.
Returns:
- iou (float) – The average IoU of the predicted masks, in percent.
Source code in npfl138/datasets/cags.py
visualize
staticmethod
Visualize the given image plus predicted mask.
Parameters:
- image (Tensor) – A torch.Tensor of shape [C, H, W] with dtype torch.uint8.
- mask (Tensor) – A torch.Tensor with H * W float values in [0, 1].
- show (bool) – Controls whether to show the figure or return it: if True, the figure is shown using plt.show(); if False, the plt.Figure instance is returned, which can be saved to TensorBoard using the add_figure method of a SummaryWriter.
Source code in npfl138/datasets/cags.py