This study examines ChatGPT, focusing on its capacity to generate academic spoken English. We compared discourse data from the Michigan Corpus of Academic Spoken English (MICASE) with output produced by the ChatGPT-4o model. The analyses revealed that the ChatGPT-generated discourse exhibited a more diverse lexicon and longer clauses than the human corpus data. Different tendencies were also observed between student and professor discourse, such as contrasting readability results. The findings offer novel insights into enhancing AI’s interactive capabilities for L2 users by aligning AI-generated output more closely with the dynamics of human spoken communication.
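The abstract does not specify how lexical diversity, clause length, or readability were measured. As a rough illustration of the kinds of metrics typically involved in such comparisons, the Python sketch below computes a simple type-token ratio, mean sentence length, and Flesch Reading Ease score; the regex tokenizer and vowel-group syllable counter are simplifying assumptions for illustration, not the study's actual procedure.

```python
import re

def tokenize(text):
    """Lowercase word tokens via a simple regex (illustrative only)."""
    return re.findall(r"[a-z']+", text.lower())

def count_syllables(word):
    """Naive vowel-group syllable estimate; corpus studies use better tools."""
    return max(1, len(re.findall(r"[aeiouy]+", word)))

def describe(text):
    """Return rough lexical-diversity and readability figures for a text."""
    words = tokenize(text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    syllables = sum(count_syllables(w) for w in words)
    ttr = len(set(words)) / len(words)        # type-token ratio (lexical diversity)
    mean_len = len(words) / len(sentences)    # mean sentence length in words
    # Flesch Reading Ease: higher scores indicate easier-to-read text
    flesch = 206.835 - 1.015 * mean_len - 84.6 * (syllables / len(words))
    return {"type_token_ratio": ttr,
            "mean_sentence_length": mean_len,
            "flesch_reading_ease": flesch}

if __name__ == "__main__":
    sample = ("Okay, so today we are going to look at how corpora "
              "help us describe academic spoken English in real settings.")
    print(describe(sample))
```

Comparing such figures for human transcripts and model output would surface the kinds of differences the abstract reports, although the published analyses may rely on more sophisticated measures.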