I've been writing a new textbook for undergrads (chemistry domain focus), and I think this excerpt is generally solid advice that is applicable here. Any feedback is welcome (the textbook is to be published under GPLv3 via GitHub). I appreciate that I am on the conservative side here. The following is a copy-paste of the final notes/tips/warnings in the book, taken from the LaTeX source with minimal edits for display here:

Rather than viewing AI as forbidden or universally permitted, consider this progression:
1. Foundation Phase (Avoid generation, embrace explanation)
When learning a new library (e.g., your first RDKit script or lmfit model), do not ask the AI to write the code.
Instead, write your own attempt (see the sketch after this list), then use AI to:
• Explain error tracebacks in plain language
• Compare your approach to idiomatic patterns
• Suggest documentation sections you may have missed
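For concreteness, here is a minimal sketch of the kind of first attempt meant above, using RDKit to compute a couple of descriptors; the particular SMILES strings and descriptor choices are illustrative assumptions, not anything prescribed by the book:

    # First attempt: compute molecular weight and logP for a few molecules.
    from rdkit import Chem
    from rdkit.Chem import Descriptors

    smiles = ["CCO", "c1ccccc1", "CC(=O)O"]  # ethanol, benzene, acetic acid

    for smi in smiles:
        mol = Chem.MolFromSmiles(smi)
        if mol is None:  # MolFromSmiles returns None for SMILES it cannot parse
            print(f"could not parse {smi}")
            continue
        print(smi, Descriptors.MolWt(mol), Descriptors.MolLogP(mol))

If a script like this raises a traceback you do not understand, pasting the traceback (not the task) into the AI and asking for an explanation keeps you in the Foundation Phase.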
2. Apprenticeship Phase (Pair programming)
Once you can write working but inelegant code, use AI as a collaborative reviewer:
• Refactor working scripts for readability
• Vectorize slow loops you have already prototyped (see the sketch after this list)
• Generate unit tests for functions you have written
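A sketch of the vectorization use case, assuming NumPy and using an Arrhenius-style rate calculation purely as an illustration:

    import numpy as np

    R = 8.314  # gas constant, J mol^-1 K^-1

    def rates_loop(temps, ea=50_000.0, a=1e12):
        # Working but slow prototype: one temperature at a time.
        out = []
        for t in temps:
            out.append(a * np.exp(-ea / (R * t)))
        return np.array(out)

    def rates_vectorized(temps, ea=50_000.0, a=1e12):
        # Suggested refactor operating on the whole array at once.
        temps = np.asarray(temps, dtype=float)
        return a * np.exp(-ea / (R * temps))

    # Audit the refactor against your own prototype before trusting it.
    t = np.linspace(250.0, 400.0, 5)
    assert np.allclose(rates_loop(t), rates_vectorized(t))

Checking the suggested refactor against the prototype you wrote yourself is what makes this pair programming rather than delegation.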
3. Independence Phase (Managed delegation)
When you have the skill to write the code yourself but choose to delegate to save time, you are essentially trading the effort of writing for the effort of auditing. Because your prompts are condensed summaries of intent rather than literal instructions, the LLM must fill the "ambiguity gap" with educated guesses.
Delegation only works if you are skilled enough to recognise when those guesses miss the mark; if your words were precise enough to never be misunderstood, they would already be code. Delegating code generation without that oversight is dangerous, and in a professional environment it is deeply incompetent behaviour.
Example use cases include:
• Generate boilerplate for familiar patterns, then audit line-by-line (see the sketch after this list)
• Prototype alternative algorithms you already understand conceptually
• Document code you have written (reverse the typical workflow)
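As a concrete instance of the first item, here is the sort of boilerplate an experienced user might delegate and then audit: a standard single-Gaussian fit with lmfit. The synthetic data and initial guesses are assumptions for illustration only:

    import numpy as np
    from lmfit import Model

    def gaussian(x, amp, cen, wid):
        return amp * np.exp(-((x - cen) ** 2) / (2 * wid ** 2))

    # Synthetic data standing in for a real measurement.
    rng = np.random.default_rng(0)
    x = np.linspace(-5, 5, 200)
    y = gaussian(x, amp=3.0, cen=0.5, wid=1.2) + rng.normal(0, 0.1, x.size)

    model = Model(gaussian)
    params = model.make_params(amp=1.0, cen=0.0, wid=1.0)  # initial guesses: audit these
    result = model.fit(y, params, x=x)
    print(result.fit_report())

The audit here means checking that the model function matches the physics you intend, that the initial guesses are sensible for your data, and that the reported uncertainties are plausible; exactly the judgement the earlier phases are meant to build.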