DOI: https://doi.org/10.1007/s10676-024-09775-5
Publication Date: 2024-06-01
ChatGPT is bullshit
© The Author(s) 2024
Abstract
Recently, there has been considerable interest in large language models: machine learning systems which produce humanlike text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, are better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.
Introduction
give the impression that this is what they’re doing. This, we suggest, is very close to at least one way that Frankfurt talks about bullshit. We draw a distinction between two sorts of bullshit, which we call ‘hard’ and ‘soft’ bullshit, where the former requires an active attempt to deceive the reader or listener as to the nature of the enterprise, and the latter only requires a lack of concern for truth. We argue that at minimum, the outputs of LLMs like ChatGPT are soft bullshit: bullshit (that is, speech or text produced without concern for its truth) that is produced without any intent to mislead the audience about the utterer’s attitude towards truth. We also suggest, more controversially, that ChatGPT may indeed produce hard bullshit: if we view it as having intentions (for example, in virtue of how it is designed), then the fact that it is designed to give the impression of concern for truth qualifies it as attempting to mislead the audience about its aims, goals, or agenda. So, with the caveat that the particular kind of bullshit ChatGPT outputs is dependent on particular views of mind or meaning, we conclude that it is appropriate to talk about ChatGPT-generated text as bullshit, and flag up why it matters that – rather than thinking of its untrue claims as lies or hallucinations – we call bullshit on ChatGPT.
What is ChatGPT?
objects. Large language models simply aim to replicate human speech or writing. This means that their primary goal, insofar as they have one, is to produce human-like text. They do so by estimating the likelihood that a particular word will appear next, given the text that has come before.
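This next-word estimation can be made concrete with a toy sketch. What follows is our own minimal illustration, not ChatGPT’s actual architecture: a simple bigram model that, like an LLM, chooses each next word in proportion to how likely it is to follow the text so far. All names here (`corpus`, `follows`, `next_word`) are illustrative choices, and the training text is a made-up example.

```python
import random
from collections import Counter, defaultdict

# A tiny "training set": the model will only learn which words tend
# to follow which other words in this text.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each preceding word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to its estimated probability."""
    counts = follows[prev]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate human-looking text one word at a time. Nothing in this loop
# consults whether the resulting sentence is true of the world.
text = ["the"]
for _ in range(6):
    text.append(next_word(text[-1]))
print(" ".join(text))
```

The point the sketch makes vivid is that truth never enters the procedure: the model tracks only which words tend to follow which, so its outputs can be fluent and corpus-like while bearing no designed relation to the facts.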
As Judge P. Kevin Castel put it, ChatGPT produced a text filled with “bogus judicial decisions, with bogus quotes and bogus internal citations”. Similarly, when computer science researchers tested ChatGPT’s ability to assist in academic writing, they found that it was able to produce surprisingly comprehensive and sometimes even accurate text on biological subjects given the right prompts. But when asked to produce evidence for its claims, “it provided five references dating to the early 2000s. None of the provided paper titles existed, and all provided PubMed IDs (PMIDs) were of different unrelated papers” (Alkaissi and McFarland, 2023). These errors can “snowball”: when the language model is asked to provide evidence for or a deeper explanation of a false claim, it rarely checks itself; instead it confidently produces more false but normal-sounding claims (Zhang et al. 2023). The accuracy problem for LLMs and other generative AIs is often referred to as the problem of “AI hallucination”: the chatbot seems to be hallucinating sources and facts that don’t exist. These inaccuracies are referred to as “hallucinations” in both technical (OpenAI, 2023) and popular contexts (Weise & Metz, 2023).
generating. But this will only make it produce text similar to the text in the database; doing so will make it more likely that it reproduces the information in the database but by no means ensures that it will.
Lies, ‘hallucinations’ and bullshit
Frankfurtian bullshit and lying
I entered a pun competition and because I really wanted to win, I submitted ten entries. I was sure one of them would win, but no pun in ten did.
- would be regarded as a lie, as I have never entered such a competition (Proops & Sorensen, 2023: 3). Later, this view is refined such that the speaker only lies if they intend the hearer to believe the utterance. The suggestion that the speaker must intend to deceive is a common stipulation in literature on lies. According to the “traditional account” of lying:
To lie =df. to make a believed-false statement to another person with the intention that the other person believe that statement to be true (Mahon, 2015).
“…does necessarily attempt to deceive us about is his enterprise. His only indispensably distinctive characteristic is that in a certain way he misrepresents what he is up to” (2005: 54).
Bullshit distinctions
“In contrast [to merely unintelligible discourse], indifference to the truth is extremely dangerous. The conduct of civilized life, and the vitality of the institutions that are indispensable to it, depend very fundamentally on respect for the distinction between the true and the false. Insofar as the authority of this distinction is undermined by the prevalence of bullshit and by the mindlessly frivolous attitude that accepts the proliferation of bullshit as innocuous, an indispensable human treasure is squandered” (2002: 343).
ChatGPT is bullshit
ChatGPT is a soft bullshitter
chatbot can be described as having intentions, it is indifferent to whether its utterances are true. It does not and cannot care about the truth of its output.
ChatGPT as hard bullshit
ChatGPT is a bullshit machine
ChatGPT may be a hard bullshitter
similar function or intention which would justify calling it a confabulator, liar, or hallucinator.
“To adopt the intentional stance
mistakes and convey falsehoods. If ChatGPT is trying to do anything, it is trying to portray itself as a person.
Bullshit? hallucinations? confabulations? The need for new terminology
Conclusion
pointed out, they are not trying to convey information at all. They are bullshitting.
References
Bacin, S. (2021). My duties and the morality of others: Lying, truth and the good example in Fichte’s normative perfectionism. In S. Bacin, & O. Ware (Eds.), Fichte’s system of Ethics: A critical guide. Cambridge University Press.
Cassam, Q. (2019). Vices of the mind. Oxford University Press.
Cohen, G. A. (2002). Deeper into bullshit. In S. Buss, & L. Overton (Eds.), The contours of Agency: Essays on themes from Harry Frankfurt. MIT Press.
Davis, E., & Aaronson, S. (2023). Testing GPT-4 with Wolfram Alpha and Code Interpreter plug-ins on math and science problems. arXiv preprint arXiv:2308.05713v2.
Dennett, D. C. (1983). Intentional systems in cognitive ethology: The Panglossian paradigm defended. Behavioral and Brain Sciences, 6, 343-390.
Dennett, D. C. (1987). The intentional stance. MIT Press.
Whitcomb, D. (2023). Bullshit questions. Analysis, 83(2), 299-304.
Easwaran, K. (2023). Bullshit activities. Analytic Philosophy, 00, 1-23. https://doi.org/10.1111/phib.12328.
Edwards, B. (2023). Why ChatGPT and Bing Chat are so good at making things up. Ars Technica. https://arstechnica.com/information-technology/2023/04/why-ai-chatbots-are-the-ultimate-bs-machines-and-how-people-hope-to-fix-them/, accessed 19th April, 2024.
Frankfurt, H. (2002). Reply to cohen. In S. Buss, & L. Overton (Eds.), The contours of agency: Essays on themes from Harry Frankfurt. MIT Press.
Frankfurt, H. (2005). On bullshit. Princeton University Press.
Knight, W. (2023). Some glimpse AGI in ChatGPT. Others call it a mirage. Wired, August 18, 2023. Accessed via https://www.wired.com/story/chatgpt-agi-intelligence/.
Levinstein, B. A., & Herrmann, D. A. (forthcoming). Still no lie detector for language models: Probing empirical and conceptual roadblocks. Philosophical Studies, 1-27.
Levy, N. (2023). Philosophy, bullshit, and peer review. Cambridge University Press.
Lightman, H., et al. (2023). Let’s verify step by step. arXiv preprint arXiv:2305.20050.
Lysandrou (2023). Comparative analysis of drug-GPT and ChatGPT LLMs for healthcare insights: Evaluating accuracy and relevance in patient and HCP contexts. arXiv preprint arXiv:2307.16850v1.
Macpherson, F. (2013). The philosophy and psychology of hallucination: An introduction. In Macpherson & Platchias (Eds.), Hallucination. London: MIT Press.
Mahon, J. E. (2015). The definition of lying and deception. The Stanford Encyclopedia of Philosophy (Winter 2016 Edition), Edward N. Zalta (Ed.), https://plato.stanford.edu/archives/win2016/entries/lying-definition/.
Mallory, F. (2023). Fictionalism about chatbots. Ergo, 10(38), 1082-1100.
Mandelkern, M., & Linzen, T. (2023). Do language models’ words refer? arXiv preprint arXiv:2308.05576.
Proops, I., & Sorensen, R. (2023). Destigmatizing the exegetical attribution of lies: The case of Kant. Pacific Philosophical Quarterly. https://doi.org/10.1111/papq.12442.
Sarkar, A. (2023). ChatGPT 5 is on track to attain artificial general intelligence. The Statesman, April 12, 2023. Accessed via https://www.thestatesman.com/supplements/science_supplements/chatgpt-5-is-on-track-to-attain-artificial-general-intelligence-1503171366.html.
Shah, C., & Bender, E. M. (2022). Situating search. CHIIR ’22: Proceedings of the 2022 Conference on Human Information Interaction and Retrieval, March 2022, pp. 221-232. https://doi.org/10.1145/3498366.3505816.
Weise, K., & Metz, C. (2023). When AI chatbots hallucinate. New York Times, May 9, 2023. Accessed via https://www.nytimes.com/2023/05/01/business/ai-chatbots-hallucination.html.
Weiser, B. (2023). Here’s what happens when your lawyer uses ChatGPT. New York Times, May 23, 2023. Accessed via https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html.
Zhang et al. (2023). How language model hallucinations can snowball. arXiv preprint arXiv:2305.13534v1.
Zhu, T., et al. (2023). Large language models for information retrieval: A survey. arXiv preprint arXiv:2308.17107v2.
- Michael Townsen Hicks
Michael.hicks@glasgow.ac.uk
James Humphries
James.Humphries@glasgow.ac.uk
Joe Slater
Joe.Slater@glasgow.ac.uk
University of Glasgow, Glasgow, Scotland

A particularly surprising position is espoused by Fichte, who regards as lying not only lies of omission, but also knowingly failing to correct someone who is operating under a falsehood. For instance, if I were to wear a wig and someone believed it to be my real hair, Fichte would regard this as a lie for which I am culpable. See Bacin (2021) for further discussion of Fichte’s position.
Originally published in Raritan, VI(2) in 1986. References to that work here are from the 2005 book version.

In making this comment, Frankfurt concedes that what Cohen calls “bullshit” is also worthy of the name. In Cohen’s use (2002), bullshit is a type of unclarifiable text, which he associates with French Marxists.

Several other authors have also explored this area in various ways in recent years, each adding valuable nuggets to the debate. Dennis Whitcomb and Kenny Easwaran expand the domains to which “bullshit” can be applied. Whitcomb argues there can be bullshit questions (as well as propositions), whereas Easwaran argues that we can fruitfully view some activities as bullshit (2023). While we accept that these offer valuable streaks of bullshit insight, we will restrict our discussion to the Frankfurtian framework. For those who want to wade further into these distinctions, Neil Levy’s Philosophy, Bullshit, and Peer Review (2023) offers a taxonomical overview of the bullshit out there.
This need not undermine their goal. The advertiser may intend to impress associations (e.g., positive thoughts like “cowboys” or “brave” with their cigarette brand) upon their audience, or reinforce/instil brand recognition.

Frankfurt describes this kind of scenario as occurring in a “bull session”: “Each of the contributors to a bull session relies…upon a general recognition that what he expresses or says is not to be understood as being what he means wholeheartedly or believes unequivocally to be true” (2005: 37). Yet Frankfurt claims that the contents of bull sessions are distinct from bullshit.

It’s worth noting that something like the distinction between hard and soft bullshitting we draw also occurs in Cohen (2002): he suggests that we might think of someone as a bullshitter as “a person who aims at bullshit, however frequently or infrequently he hits his target”, or if they are merely “disposed to bullshit: for whatever reason, to produce a lot of unclarifiable stuff” (p. 334). While we do not adopt Cohen’s account here, the parallels between his characterisation and our own are striking.
Of course, rocks also can’t express propositions – but then, part of the worry here is whether ChatGPT actually is expressing propositions, or is simply a means through which agents express propositions. A further worry is that we shouldn’t even see ChatGPT as expressing propositions – perhaps there are no communicative intentions, and so we should see the outputs as meaningless. Even accepting this, we can still meaningfully talk about them as expressing propositions. This proposal – fictionalism about chatbots – has recently been discussed by Mallory (2023).
