This paper investigates the philosophical implications of Large Language Models (LLMs), such as ChatGPT, in light of classical and contemporary theories of mind and consciousness. It revisits key philosophical debates to evaluate the plausibility of machine consciousness. Drawing on recent discussions, including Chalmers's 2023 analysis, the paper argues that LLMs may not yet exhibit consciousness, but that their development demands new conceptual frameworks. It concludes that traditional notions of mind may be insufficient to grasp the cognitive nature of emerging technologies and that a new ontology of the mind might be necessary.