Against Slickness: A Framework for AI Literacy in Design Education

This framework adapts design education practices to the Understand, Experiment, Critique, Integrate model, widely applied in fields such as engineering and software development, and draws on the 4D Fluency Framework (Delegation, Description, Discernment, Diligence) to address the specific challenges faced by creative professionals and students (Dakan & Feller, 2025). It positions AI not as a neutral tool but as a technology that actively shapes, constrains, and provokes design practice (Ihde, 1990).


Shiny but Hollow: The Problem of AI “Slickness”

In the context of generative AI, slickness refers to the polished, visually appealing outputs that AI systems produce with minimal effort. These results are seductive because they appear professional, finished, and ready to use. But effortless perfection discourages iteration, exploration, and critique, and slickness comes with risks:

  • Surface over depth: The output looks impressive but may lack originality, conceptual grounding, or rigor.

  • Homogenization: AI systems tend to reproduce aesthetic defaults based on training data, which narrows rather than expands creative diversity (Wadinambiarachchi, Kelly, Pareek, Zhou, & Velloso, 2024).

  • Shortcut thinking: Students may rely on AI’s polish instead of engaging in iterative, messy, and exploratory processes that build genuine design skill.

  • Erosion of authorship: Overvaluing slick AI outputs can obscure the role of human decision-making, reducing students to prompt writers rather than critical designers (UNESCO, 2021).

Polish Without Depth

The Understand, Experiment, Critique, Integrate framework, reinforced by Anthropic’s 4Ds, equips students and educators to resist the allure of polished AI outputs and reframe them as starting points for deeper learning. By unpacking defaults, treating prompts as design material, subjecting outputs to critique, and embedding AI within embodied practice, the framework restores rigor and reflection to design education.


The framework unfolds across four principles, each offering a practical way to resist slickness and reframe AI as part of a deeper design process:

1. Understand / Delegation

Choosing what only human hands can do, and what AI should assist with.

  • Situate AI as a creative partner whose influence must be examined, not passively accepted.

  • Decide what aspects of a project should remain in human hands and what can be reasonably delegated to AI.

  • Map its strengths (speed, variation, visualization) and limits (bias, superficiality, homogeneity).

  • Unpack how training data and model design encode aesthetic defaults and cultural assumptions.

  • Build a shared critical vocabulary to interrogate AI outputs with the same rigor applied to sketches, models, and precedents.

2. Experiment / Description

Giving AI precise direction so its outputs become material, not noise.

  • Treat prompts as a design material, requiring iteration, testing, and refinement.

  • Learn to describe intentions clearly and with context, guiding AI with purpose rather than vague instructions.

  • Generate abundant variations, then resist the temptation to settle on the “slickest” outcome.

  • Reframe “errors” or “hallucinations” as productive divergences that challenge habitual thinking.

  • Keep a design log of prompt-output cycles as a reflective archive, akin to a sketchbook.

3. Critique / Discernment

Seeing past polish to judge originality, authorship, and impact.

  • Subject AI outputs to disciplinary criteria: originality, feasibility, authorship, and cultural impact.

  • Exercise discernment to surface biases and omissions, asking who or what is excluded from the AI’s imagined futures.

  • Position outputs as provocations for discussion and revision, not as end points.

  • Incorporate collective critique: AI-assisted work must be debated in studio, not consumed in isolation.

4. Integrate / Diligence

Taking responsibility for how AI enters the workflow, and owning the outcome.

  • Layer AI exploration with embodied practices like sketching, material play, or prototyping.

  • Place AI at specific junctures in the workflow (ideation, iteration, visualization) without allowing it to dominate.

  • Maintain authorship and accountability: explicitly track where AI intervenes and where human decision-making drives outcomes.

  • Frame AI as a lens, not a substitute, expanding possibilities while preserving the discipline of design judgment.


In the Classroom

In practice, the framework encourages assignments that deliberately push against slickness:

  • Asking AI to attempt what it cannot easily do (emotional nuance, cultural specificity, embodied gestures).

  • Having students remix their own sketches with AI and reflect on where the output diverges from their voice.

  • Requiring comparative work (with AI vs. without AI) so students analyze what is gained in polish versus what is lost in nuance.

Assignments

Exercise 1: Expose the Boundaries

Prompt: Choose a project and intentionally ask AI to produce something it should not be good at (for example: emotional nuance, cultural specificity, or embodied gestures).

Example: Public restroom sign. Students prompt AI to create variations of restroom signage. The AI may stylize the symbols with glossy gradients, futuristic silhouettes, or playful icons, but often fails on the essentials: legibility at a distance, cultural inclusivity (beyond male/female binaries), and universal accessibility standards.

Questions for Discussion:

  • Where exactly does the AI fall short? Does it produce signs that look trendy but are confusing in real-world use?

  • What design qualities (clarity, universality, inclusivity) depend on human judgment and cannot be delegated to AI?

  • How do these failures expose the difference between surface polish and design integrity?

  • In what ways does this exercise highlight the value of human expertise in applying cultural knowledge, empathy, and usability principles?

The goal is for students to see AI’s limits not as flaws to fix, but as revealing what human designers uniquely contribute: empathy, context, and responsibility.


Exercise 2: Remix Your Own Voice

Prompt: Feed AI your own creative material, such as sketches, text, or project notes, and ask it to generate iterations.

Example: In a transportation design class, students sketch their own concept for an electric scooter. Each sketch reflects a different design voice. Some emphasize compact folding mechanisms, others highlight bold frame geometry or sustainable materials. Students then use AI to generate iterations of their sketches. Some of these outputs extend the student’s ideas in meaningful ways, such as exaggerating their chosen geometry, while others drift into generic “sci-fi” aesthetics that ignore practical concerns like balance, weight distribution, or battery integration.

Questions for Discussion:

  • Which AI outputs genuinely carry forward the student’s design vision, and which ones flatten it into a trendy template?

  • How can students decide which variations feel like authentic extensions of their voice versus distortions or generic filler?

  • What blind spots does the AI expose? For example, did the AI reveal hidden habits in the student’s sketches such as over-reliance on symmetry or exaggerated angles?

  • How can remixing AI outputs sharpen the student’s awareness of their personal style while also challenging them to critique what does not align with their intentions?

The purpose of this exercise is to help students distinguish between authorship and automation. It encourages them to identify what truly belongs to their design identity and to see AI as a mirror that sometimes extends their voice and sometimes distorts it, requiring careful reflection.


Exercise 3: Double Vision

Prompt: Present one design solution developed without AI and one with AI. Compare the two and reflect on the trade-offs.

Example: In a product design class, students are asked to design a bicycle helmet. First, they sketch and model the helmet by hand, focusing on ergonomics, airflow, safety standards, and manufacturability. Then, they use AI to generate variations of the same concept. While the AI quickly produces a wide range of polished renderings, these renderings often ignore the subtleties of comfort, fit, and safety engineering.

Questions for Discussion:

  • What did the AI-assisted process contribute in terms of speed, visual polish, or stylistic exploration?

  • What was lost compared to the hand-developed design, such as nuanced ergonomics, originality, or technical rigor?

  • How might the two approaches be combined? For example, using AI to explore surface styling and variations while relying on manual sketching and prototyping to resolve proportions, ventilation, and impact safety.

This exercise helps students see AI not as a replacement but as a complement. By putting the two solutions side by side, they learn to evaluate where AI adds value and where human judgment and expertise remain irreplaceable.


Exercise 4: Reverse-Engineer

Prompt: Select a familiar object (for example, a chair, camera, or shoe) that already exists. Attempt to recreate it using AI prompts and sketches.

Example: In a consumer product design class, students bring in a common sneaker from a well-known brand. They carefully study its proportions, stitching, materials, and sole pattern, then attempt to reproduce the sneaker using a combination of sketches and AI prompts. The AI produces renders that may capture the general silhouette and color blocking but often miss subtle details such as stitching patterns, the texture of knit materials, or the precise ergonomic shaping of the sole. 

Questions for Discussion:

  • How closely did the AI outputs match the real sneaker in terms of scale, proportion, and material accuracy?

  • Which design elements translated successfully, and which ones were oversimplified, distorted, or misinterpreted?

  • What does this gap reveal about the difference between human observation and AI’s reliance on learned patterns?

  • How could students adjust their sketches, refine their prompts, or combine both to push the AI closer to their intended outcome?

This assignment trains students to observe with precision while also understanding the limits of AI’s “understanding.” By reverse-engineering something familiar, they strengthen both their own design eye and their ability to guide AI more intentionally.


Final Thoughts

Generative AI should not be allowed to collapse design education into the production of polished artifacts. Its promise lies in expanding the scope of creative practice while safeguarding the discipline’s foundational methods and values. By bringing together design-centered principles (Understand, Experiment, Critique, Integrate) with Anthropic Academy’s 4Ds of AI fluency (Delegation, Description, Discernment, Diligence), designers and educators can:

  • Protect and evolve core disciplinary skills even as they adopt new computational tools.

  • Cultivate critical literacy that resists the drift toward technological determinism.

  • Embed ethical reflection into daily practice rather than treating it as an afterthought.

  • Recast AI not as a replacement for human ingenuity, but as a provocative collaborator capable of reshaping design cultures.


References

Dakan, R., & Feller, J. (2025). Framework for AI fluency (Practical summary document, Version 1.1). https://ringling.libguides.com/ai/framework

Ihde, D. (1990). Technology and the lifeworld: From garden to earth. Indiana University Press.

UNESCO. (2021). Recommendation on the ethics of artificial intelligence. UNESCO.

Wadinambiarachchi, S., Kelly, R. M., Pareek, S., Zhou, Q., & Velloso, E. (2024). The effects of generative AI on design fixation and divergent thinking. arXiv preprint arXiv:2403.11164. https://arxiv.org/abs/2403.11164
