I've thought something like a software archeology class would be really fun as an elective. I agree that it can make sense to use intentionally limited things, especially if something is hard to teach otherwise. E.g. learning to parse datasheets and probe things with an oscilloscope is best done by actually doing it, but starting off with an n-layer PCB instead of a breadboard would be pretty crazy. A benefit of using old things can sometimes be useful simplicity, but sometimes it's just that they're cheap. There's also a lot of interesting (if often commercially and methodologically irrelevant these days) material to teach as a matter of history.

I agree it all needs to be well motivated. I'm often suspicious of attempts to teach things indirectly, but of course a lot of indirect learning happens anyway. And a lot (direct and indirect) happens in parallel, and I think it's useful to look for places to usefully exploit that, especially when it comes to the conflict of college as pre-job training vs. study.

Do you really need a limited or obscure platform to teach or practice most things about debugging? printf and any debugger tool that supports breakpoints and stepping would teach a lot, with modern (even graphical) tools having a lot less friction while not dampening what is learned. Bonus points if you actually teach more advanced debuggers, so another generation of developers isn't released into the world thinking basic console gdb + printf are the extent of what's available to help in the practice of debugging.

A danger of only teaching limited or restricted tools is that students end up thinking that's all there is. This happens at every level, from sorting algorithms to programming languages to whole ways of thinking about things. When you artificially constrain the box in an attempt to focus on something basic or avoid the clichés of other boxes, all too often the result is just that thinking doesn't generalize and is now crippled inside the constrained box.
Timeline is important; I wonder if we're both interpreting "Master's program" quite differently here. In the US, a Bachelor's program is typically 4 years while a Master's is typically 2, and many Master's programs are industry-oriented (no thesis, just classes/projects) rather than being a stepping stone to full PhD research. The Duke program here seems to work as typical: 2 years + capstone project (and it even seems to require a summer internship). A longer program is in some ways a bit more forgiving of less-than-ideal teaching efficiency. (At my old school, the game design undergrads had a course that required designing physical board games. There are plausible arguments that board games as a medium make it easier to teach or focus on important things in design that are harder to teach with digital video games. But even if that's not really true (as I'm arguing here applies to the Playdate not being particularly useful over just normal PC/mobile development), at least it's just one course among many for the whole program. And at least there's a >$10bn market for board games.)
The Playdate features a mic, an accelerometer, and a crank as unique inputs, and it's portable; those can suggest interesting game design ideas on their own. In one sense, if you want to use those features, it's simpler because you can count on them being there. In another sense, except for (I guess) the crank, the other two inputs are part of ~every phone and widely available on any PC/laptop. Developing for PC or mobile gives you access to even more interesting input and output for design consideration too: keyboards, mice (with/without scrollwheels), cameras, haptic feedback, gyroscopes, touch, light or temperature sensors, weird whatever devices over USB or wireless (Nintendo wiimotes, steering wheels, arcade sticks), networking... and making use of these things has never been easier, with drivers widely available and especially with the engines that let you click around to configure things. I would think that if your goal is to learn game design, you would want to prioritize doing your design on a platform that is as open and flexible as possible, to allow exploring as much of the design space as you can. Perhaps the teacher thinks it's useful to add artificial constraints to narrow the design space or focus it from a certain perspective (like: let's design a multiplayer game, but with the constraint that you have only one device, no networking or multiple controllers). Fine, but they don't need to start with a platform where those constraints are baked in from the start and can't be lifted.
Similarly, Unreal and the other popular engines, along with libraries like DirectX, SDL, raylib, pygame, or even just the web browser with HTML Canvas, are all open and flexible in what they allow you to explore in design space. Some are more limited than others (you're going to have a hard time using a 2D-focused library or engine for a 3D game) and some are easier to express ideas in than others (you're going to have a better time using a 2D-focused library or engine for a 2D game), but they're all pretty easy to express the basics in, and they're all pretty good at letting you rapidly prototype, playtest, and iterate. If you artificially impose on yourself the same constraints the Playdate has inherently, they can be even easier to use, and easier yet if the teacher provides a template. Browse the games on itch.io tagged with playdate: I don't think any would be particularly harder (and some may even be easier) to do in <random other tooling>. The article mentions it taking "months" to learn Unreal, which is true in one sense (it can take longer, especially if you don't already know C++), but false in another sense in that getting up and running is quick; any competent introduction will have the student getting something on screen and responding to their input within an hour. For the very basic stuff a typical Playdate game does, it won't take that long to learn to do it with Unreal.
Another way of looking at it: take the "Owl Invasion" example from the article, "an endless wave-based action game with tower defense mechanics." Unlike the other game, there's no mention of using any of the Playdate's unique inputs, so is there anything fundamentally unique about the Playdate that suggests such a game would be easier to develop for it vs. an arbitrary other tool? Was there anything learned about game design from the experience that wouldn't have been learned otherwise? What if you had artificially mandated the same visual constraints of resolution and (lack of) color? Was it useful to be forced to incorporate an owl, vs. a rat, vs. a pirate, vs. having no restrictions? (This one, perhaps; even creative writing workshops like to require incorporating some element, but that's more about trying to unblock creativity and avoid decision paralysis than directly teaching some principle.) If the impact of using the Playdate vs. something else is fairly arbitrary for accomplishing the teaching goals, then unless the student is particularly interested in the Playdate on their own, it's more beneficial along several axes to use something else.