User Guide
Why can I only view 3 results?
You can view all results only when you are connected from the network of a member institution. For non-member institutions, we offer a 1-month free trial if institution officials apply.
Why are there so many results that aren't mine?
References in many bibliographies are written in the form "Surname, I.", so the citations of academics who share the same surname and initial may occasionally be mixed together. This problem is common to citation indexes all over the world.
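For illustration, here is a minimal sketch (the names and the normalization rule are hypothetical, not the index's actual matching logic) of how two different authors can collapse to the same "Surname, I." key:

    def citation_key(full_name: str) -> str:
        """Reduce a full "Surname, Given" author name to the "Surname, I." form."""
        surname, given = [part.strip() for part in full_name.split(",", 1)]
        return f"{surname}, {given[0]}."

    # Two distinct researchers yield the identical citation key,
    # so an index that matches on this key can mix their citations together.
    print(citation_key("Yilmaz, Ahmet"))  # Yilmaz, A.
    print(citation_key("Yilmaz, Ayse"))   # Yilmaz, A.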
How can I see only citations to my article?
After searching for the title of your article, click on the details section of the article you selected to see the citations to it.
 Views 15
 Downloads 4
Investigating Reliability and Stability of Crowdsourcing and Human Computational Outputs based on Artificial Intelligence
2021
Journal:  
Turkish Online Journal of Qualitative Inquiry
Author:  
Abstract:

Crowdsourcing and human computation outputs provide a scalable and convenient way to perform a variety of human-generated tasks and to evaluate the effectiveness of different crowdsourcing platforms based on artificial intelligence. Crowdsourcing-generated datasets depend on multiple factors, including quality, task reward, and accuracy-measuring filters. The present study was designed to evaluate the reliability, stability, and consistency of crowdsourcing platform outputs. We conducted a longitudinal experiment over a specified time span on two crowdsourcing platforms, Amazon Mechanical Turk and CrowdFlower, to demonstrate that reliability outcomes vary considerably across platforms, whereas repeating tasks on the same platform yields consistent results. Different tasks on three different datasets were performed to evaluate the quality of the task interface, the experience level of the workers supplied by the platform, and the accuracy of outcomes as a function of task completion time. The outcomes revealed a significant (p<0.05) advantage of MTurk over CrowdFlower in terms of reliability, accuracy, and time taken to complete a task. Tasks replicated on the two platforms showed significant differences in quality-based outcomes. The data quality of the same tasks repeated at different times was stable within a single platform, while it differed across crowdsourcing platforms over different time spans. It was concluded from the findings that, by employing standard crowdsourcing platform settings, varying orders and magnitudes of task completion on different platforms can easily be achieved with varying levels of accuracy.
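As a rough illustration of the kind of comparison reported above (a minimal sketch with invented accuracy scores; the study's actual data and statistical test are not reproduced here), a two-sample test can check whether one platform's mean accuracy is significantly higher at the 0.05 level:

    # Hypothetical per-task accuracy scores; these are not the study's data.
    from scipy import stats

    mturk_accuracy = [0.91, 0.88, 0.93, 0.90, 0.89, 0.92]
    crowdflower_accuracy = [0.84, 0.86, 0.83, 0.85, 0.82, 0.87]

    # Two-sample t-test on the mean accuracies of the two platforms.
    t_stat, p_value = stats.ttest_ind(mturk_accuracy, crowdflower_accuracy)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("The difference in mean accuracy is significant at the 0.05 level.")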

Keywords:

Citation Owners
Information: There is no citation to this publication.

Turkish Online Journal of Qualitative Inquiry

Field :   Educational Sciences

Journal Type :   International

Metrics
Articles : 4,283
Citations : 1,161
2023 Impact : 0.002