
About data augmentation  #41

@BaophanN


Dear author, in the file prepare_data.py, there is this part:

        for angle in range(5, 360, 5):
            rotated_marks = rotate_centralized_marks(centralied_marks, angle)
            if boundary_check(rotated_marks) and overlap_check(rotated_marks):
                rotated_image = rotate_image(image, angle)
                output_name = os.path.join(
                    args.output_directory, name + '_' + str(angle))
                write_image_and_label(
                    output_name, rotated_image, rotated_marks, name_list)

As I understand it, this directly generates rotated versions of the training data and stores them in the annotations folder, so the model should be robust to rotation when the vehicle turns.
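Pre-generating every rotated copy on disk is one option; the same rotation could also be applied on the fly during training, which avoids the long preparation step. A minimal, hypothetical sketch of rotating centralized mark coordinates per sample (the sign convention and the extra columns of the marks array are assumptions and must match whatever `rotate_centralized_marks` in this repo actually does; image rotation via the repo's `rotate_image` helper is omitted):

```python
import numpy as np

def rotate_marks(centralized_marks, angle_deg):
    """Rotate mark coordinates (relative to the image center) by angle_deg.

    Assumes the first two columns of `centralized_marks` are x, y offsets
    from the image center; any extra columns (e.g. direction angles) are
    left untouched here, although the repo's rotate_centralized_marks
    presumably adjusts those as well.
    """
    theta = np.radians(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    out = centralized_marks.astype(float).copy()
    out[:, :2] = out[:, :2] @ rot.T
    return out

# Inside a Dataset.__getitem__, one could draw a random angle per sample
# instead of materializing all 71 rotated copies on disk, e.g.:
#   angle = random.choice(range(0, 360, 5))
#   image = rotate_image(image, angle)   # repo helper, not shown here
#   marks = rotate_marks(marks, angle)
```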

Now I want to make the model robust to scale as well. The ps 2.0 dataset images are 600x600 pixels, corresponding to 10 m x 10 m in real life. However, my input data is 800x800, corresponding to 16 m x 16 m, which means my visible field is larger than ps 2.0's. I have tried resizing and cropping my input to fit the original data configuration, and it works fine. However, I want to apply some augmentations to the original training data to make the model robust to scale. Can you give me some details on how to implement this? The technical problem is the loop above in the prepare_dataset file: it takes too long to generate annotations for training.
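One way to approximate scale robustness without regenerating the whole dataset on disk is per-sample scale jitter: resize the image, then center-crop (zoom in) or zero-pad (zoom out) back to 600x600, scaling the centralized mark coordinates by the same factor. Below is a rough sketch under stated assumptions: it uses a simple nearest-neighbour resize to stay self-contained (in practice `cv2.resize` would be used), and it assumes the first two columns of the marks array are x, y offsets from the image center:

```python
import numpy as np

def nn_resize(image, new_h, new_w):
    """Nearest-neighbour resize; stand-in for cv2.resize to stay self-contained."""
    ys = np.arange(new_h) * image.shape[0] // new_h
    xs = np.arange(new_w) * image.shape[1] // new_w
    return image[ys][:, xs]

def scale_jitter(image, centralized_marks, scale, output_size=600):
    """Scale-augment one sample while keeping the output size fixed.

    Assumes `centralized_marks` holds x, y coordinates relative to the image
    center in its first columns, so scaling is a plain multiplication.
    """
    h, w = image.shape[:2]
    resized = nn_resize(image, int(round(h * scale)), int(round(w * scale)))
    rh, rw = resized.shape[:2]
    canvas = np.zeros((output_size, output_size, 3), dtype=image.dtype)
    if scale >= 1.0:
        # Zoom in: take a central crop of the enlarged image.
        y0, x0 = (rh - output_size) // 2, (rw - output_size) // 2
        canvas[:] = resized[y0:y0 + output_size, x0:x0 + output_size]
    else:
        # Zoom out: paste the shrunken image onto a black canvas.
        y0, x0 = (output_size - rh) // 2, (output_size - rw) // 2
        canvas[y0:y0 + rh, x0:x0 + rw] = resized
    scaled_marks = centralized_marks.astype(float).copy()
    # Coordinates relative to the center scale linearly; marks pushed outside
    # the crop after zooming in should additionally be filtered out (omitted).
    scaled_marks[:, :2] *= scale
    return canvas, scaled_marks
```

Applied on the fly with a random `scale` per sample, this sidesteps the slow pre-generation loop entirely; the jitter range (e.g. 10/16 to match an 800x800, 16 m field of view) is a choice to tune, not something the repo prescribes.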
