Description of the risks and challenges that generative artificial intelligence applications pose to academic integrity
Abstract
This article analyses, from a descriptive perspective and as the starting point of a broader line of research, the impact that generative artificial intelligence (GAI) technologies may have on academic integrity, as it plays out in teaching activity and in assessment processes in university legal education. Taking as its premise the definition of academic integrity as a set of values, the article argues that GAI gives rise to a series of risks that threaten those values, including excessive dependence on and trust in GAI, the unattainability of the pedagogical project, and the loss of competitiveness of educational institutions, among others. To minimise or neutralise these risks, and thereby prevent them from materialising as harm to academic integrity, four mitigation measures are identified for application in university settings.
Copyright 2023 Roberto Navarro-Dolmestch
This work is licensed under a Creative Commons Attribution 4.0 International License.