Scaling up AI Model Deployments with Limited and Imperfectly Labeled Data
This talk was presented at the Berkeley DeepDrive Community Meeting.
Abstract: How can we train AI models with a limited labeling budget? How does heuristic labeling of examples influence model accuracy? How do biases and noisy labels in training datasets affect the robustness of deployed AI models? In this talk, I will introduce the challenges Panasonic faces when deploying AI models and how our research projects address these obstacles. Specifically, I will present our new Home Action Genome dataset and our recent papers on robust active learning and AutoAugment methods. As a takeaway, I would like to emphasize the need for academic projects to go beyond standard dataset setups when developing new AI models.