{"id":1780,"date":"2020-12-05T12:28:29","date_gmt":"2020-12-05T05:28:29","guid":{"rendered":"http:\/\/research.binus.ac.id\/airdc\/?p=1780"},"modified":"2021-09-01T12:34:17","modified_gmt":"2021-09-01T05:34:17","slug":"annotation-system-to-build-cyberbullying-and-hate-speech-detection-model-training-dataset","status":"publish","type":"post","link":"https:\/\/research.binus.ac.id\/airdc\/2020\/12\/annotation-system-to-build-cyberbullying-and-hate-speech-detection-model-training-dataset\/","title":{"rendered":"Annotation System to Build Cyberbullying and Hate Speech Detection Model Training Dataset"},"content":{"rendered":"<p style=\"text-align: justify\">During 2019, Indonesia experienced an election period that triggered many hate speech and cyberbullying cases on Twitter. A detection tool that screens social media data can be used to curb the spread of negative content. A supervised machine learning approach can be used to build this detection tool. However, it needs thousands of labeled samples to develop a machine learning model with high accuracy. In the current study phase, an annotation system was proposed to help researchers label the raw Twitter data collected in the previous phase. An open-source tool was utilized to build a user-friendly web-based system. Three main features are proposed: multi-label annotation, multi-user validation, and a dashboard page. 
This system can help annotators perform the labeling task for thousands of text entries.<\/p>\n<p>6th International ACM In-Cooperation HCI and UX<\/p>\n<p><strong>Trisna Febriana and Arif Budiarto<\/strong><\/p>\n<p><a href=\"https:\/\/www.researchgate.net\/publication\/346644078_Annotation_System_to_Build_Cyberbullying_and_Hate_Speech_Detection_Model_Training_Dataset\">Read Full Paper<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>During 2019, Indonesia experienced an election period that triggered many hate speech and cyberbullying cases on Twitter. A detection tool that screens social media data can be used to curb the spread of negative content. A supervised machine learning approach can be used to build this detection tool. 
However, it needs thousands of labeled [&hellip;]<\/p>\n","protected":false},"author":14,"featured_media":2014,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[16],"tags":[],"class_list":["post-1780","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-publications"],"_links":{"self":[{"href":"https:\/\/research.binus.ac.id\/airdc\/wp-json\/wp\/v2\/posts\/1780","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/research.binus.ac.id\/airdc\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/research.binus.ac.id\/airdc\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/research.binus.ac.id\/airdc\/wp-json\/wp\/v2\/users\/14"}],"replies":[{"embeddable":true,"href":"https:\/\/research.binus.ac.id\/airdc\/wp-json\/wp\/v2\/comments?post=1780"}],"version-history":[{"count":2,"href":"https:\/\/research.binus.ac.id\/airdc\/wp-json\/wp\/v2\/posts\/1780\/revisions"}],"predecessor-version":[{"id":1906,"href":"https:\/\/research.binus.ac.id\/airdc\/wp-json\/wp\/v2\/posts\/1780\/revisions\/1906"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/research.binus.ac.id\/airdc\/wp-json\/wp\/v2\/media\/2014"}],"wp:attachment":[{"href":"https:\/\/research.binus.ac.id\/airdc\/wp-json\/wp\/v2\/media?parent=1780"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/research.binus.ac.id\/airdc\/wp-json\/wp\/v2\/categories?post=1780"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/research.binus.ac.id\/airdc\/wp-json\/wp\/v2\/tags?post=1780"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}