If you pursue this, be aware that consumers are tired of AI and don't believe it works well. You need to show that its "insights" are reliable, accurate, and useful.
If you're stuck in traffic, stressed out because you're running late, do you think it's helpful to have a notification on your phone pop up and tell you "Stress slightly elevated this afternoon"? Do you think the AI could suggest solutions that the user won't be upset to see ("Try to relax with a breathing exercise...")?
If you ask it "Why do I feel tired today?" do you think it's helpful to get a ChatGPT response listing bullet-point reasons people are commonly tired? You already know if you didn't get enough sleep, slept poorly, are burnt out, skipped a meal, haven't been exercising regularly, are recovering from a recent workout, are recovering from illness, or haven't been drinking enough water. Can the data collected actually identify a specific cause? Can the AI then suggest a specific, actionable solution?
> A good example among analog controllers is the Atari one that had a variable capacitor and the capacitance was measured to infer its position. Although the measurement is digital, the controller, yes, was analog.
An abacus lets you slide beads along a rod. Like your example of an analog controller, the device itself is analog but the measurement is digital. The traditional way to use an abacus is to slide beads from one end to the other, with beads on one end counting as 1-5 (or multiples of 5, and so on). But you don't have to use it like that. You could use just one bead on each rod and let its position along the rod represent a value from 0 to 5, or even from 0 to 100. Heck, you could use two beads on a rod to represent a range with their positions. By this logic, an abacus is analog but the traditional way of interpreting it is digital.
One could argue that fingers are analog in the same way as abacus beads or electronic signal voltages in "digital" circuits. Yes, the traditional way to count on your fingers is to count each finger held up as a value of 1 and then add up the number of fingers to get the represented value, but fingers can be anywhere between entirely up and entirely down. You could hold a finger halfway up and count it as 0.5.
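For illustration, here's a minimal Python sketch of that distinction (the 0.5 threshold and the 0-100 scale are my arbitrary choices, not properties of any real abacus): the same physical state yields different values depending on which interpretation you apply.

    # One physical state (a bead's position along its rod, normalized to
    # 0.0-1.0), two interpretations.

    def read_digital(position: float) -> int:
        """Traditional reading: the bead either counts (1) or doesn't (0)."""
        return 1 if position >= 0.5 else 0

    def read_analog(position: float, scale: float = 100.0) -> float:
        """Positional reading: the bead's location itself carries the value."""
        return position * scale

    pos = 0.37                 # one and the same physical state...
    print(read_digital(pos))   # 0    ...read as digital
    print(read_analog(pos))    # 37.0 ...read as analog

Nothing about the bead changes between the two calls; only the reader's model does.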
If you feel that argument falls under remark 3, I think you have some options:
1) Resolve the conflict between your example of an analog controller in remark 2 and your refusal, in remark 3, to call it digital when an analog signal/state is interpreted as a discrete value.
2) Accept that trying to fit everything in the real world into strict definitions is a fool's errand. Definitions are essentially simplified models that allow us to represent some aspect of the real world, but they can never entirely encapsulate the nature of the real world (the map is not the territory).
Let's stick with option 1 because it's more practical than philosophical (although exploring option 2 may help you cope with life better in the long run). You can go with option 1 by simply dropping remark 3 entirely and accepting that a device can be analog in its physical form and digital in its interpretation.
Alternatively, you can accept that context affects which model best describes an abacus/fingers/electronic signal because your interpretation defines what the values represent. That is, the abacus has no representation of "internal values" -- it doesn't care if the beads are supposed to be 1's, 5's, fractions, or space ships. What each bead represents lies entirely in the person looking at the beads, not the abacus.
An example in that line of thought: the computer engineer designing a chip has to face the reality that electronic signals are analog so he can design a chip that functions properly. In his context of work, the chip is analog. The software engineer who uses that chip needs to know very little about the underlying hardware and can model its behavior as entirely digital. In his context of work, the chip is digital.
I'm not going to argue against engineers using AI coding tools to write boilerplate code faster. I certainly think they're useful for that.
But outside of that context, it's problematic to argue that "you can't tell if something was created by AI just by looking at it. And if you can't tell the difference, then the difference doesn't matter."
It feels like we aren't too far away from AI being indistinguishably good at other things. Actors would obviously be upset if you started producing movies with their likeness without paying them (and without them shooting a single scene). Screenwriters, voice actors, authors, and artists would be similarly upset. Fans have already rallied against video game studios that try to use AI to replace artists.
I certainly think the "if you can't tell the difference, then the difference doesn't matter" test is problematic when you look at video shared with news stories.
So what makes writing code different? Is it because consumers of movies, television, books, and art care if AI took a job away while consumers of code don't? Is it because people who write code don't really care about writing boilerplate and just want to get past that to bigger, better things? Is it because a lot of people writing code don't like it at all and only got into it for the money?
I don't think the knee-jerk reaction to reject all AI-generated content is misplaced. AI raises real questions and creates real problems that we need to address instead of dismissing simply because writing CRUD is boring.
> But outside of that context, it's problematic to argue that "you can't tell if something was created by AI just by looking at it. And if you can't tell the difference, then the difference doesn't matter."
I agree wholeheartedly. This argument is just "the ends justify the means" in different words. Sadly, there are far too many people who actually think that's true.
I certainly expect Tesla to use the cameras on their cars for similar purposes if they haven't already. Although I would expect them to distance themselves from it by selling the location data 'in aggregate' to another company that interfaces with law enforcement agencies.
It's been overshadowed by Python, which has a sexier image because it isn't associated with Microsoft. It certainly doesn't help that in the beginning C# was a Windows-only language tied to the .NET Framework. It's taken a decade for word to get out that it has evolved past that.
From just trying things randomly, I think the objective is to get each numbered square to 'claim' or 'path into' as many empty squares as its number shows. That is, if you see a red square showing a 3, you need to click on that red square, then click on an adjacent square to 'claim' it for that red square, then click on a square adjacent to the one you just clicked to 'claim' it as well, and so on.
Your options, from worst to best:
- add a tutorial (everybody hates tutorials)
- add some concise text to the bottom of every page that explains the objective and how to play
- find a theme that makes the objective and how to play intuitive
Other feedback:
- You absolutely need to show which square is selected (if any).
- There's either a bug or some condition of play that I don't understand: you can't occupy a square a second time, even if it's currently empty, because a previous attempt to occupy it was canceled/reverted (a guess at the cause is sketched below).
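For what it's worth, here's a purely hypothetical Python sketch of what I'd guess is happening (I haven't seen your code, and every name here is made up): the game remembers every square that was ever pathed instead of checking whether the square is occupied right now, so a reverted square stays blocked forever.

    # Hypothetical reconstruction of the bug -- not the actual game code.
    class Grid:
        def __init__(self):
            self.occupied = set()     # squares currently holding a path segment
            self.ever_pathed = set()  # every square that was ever pathed

        def try_path(self, square) -> bool:
            if square in self.ever_pathed:  # buggy: rejects based on history
                return False                # (should test self.occupied instead)
            self.occupied.add(square)
            self.ever_pathed.add(square)
            return True

        def revert(self, square):
            self.occupied.discard(square)
            # square is never removed from ever_pathed, so try_path()
            # keeps rejecting it even though it's empty again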
Ah, it's a bug, thanks for pointing that out (you should be able to path to a square that was previously pathed but is currently empty). Working on it now.
Yes, in this game you lay "pipes" from "depots", and must fill the entire grid with pipes.
Thanks for the feedback, adding some text with the basic rules now.
I initially made the first "tutorial" level a 4x4 grid with a [3] in each corner, but the 4x4 levels were trivial, so I removed them.
I don't want to defend Docker because I'm not a fan, but I can tell you that as a dev the appeal is controlling the environment instead of specifying it.
Lots of Docker fans don't think about what version of libc or Java is in their image. They start with a base image, develop code that works there, and release the Docker container without ever thinking about it.
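To illustrate (this Dockerfile is invented for the example, not taken from any real project): nothing below specifies which libc or JDK version the app needs, so whatever the base image happens to ship is what production gets.

    # Illustrative only -- image tag and paths are made up.
    # The libc and JDK versions are never stated; they're whatever the
    # base image contains on the day you build.
    FROM eclipse-temurin:21-jre
    COPY target/app.jar /opt/app/app.jar
    CMD ["java", "-jar", "/opt/app/app.jar"]

The environment is controlled (everyone builds from the same image) without ever being specified (nobody wrote down what's in it).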
If it's an open source project, ignoring bugs that occur outside the official Docker build cuts out a lot of work. Inevitably someone with no Linux experience will try to set up Slackware on a Raspberry Pi to run your project because they read in a forum post that 'real nerds run Slackware.' When it doesn't work and they open a bug report, you can spend the next year trying to teach them enough about Linux to fix their system and run your project. Or you can come across as mean by saying RTFM. Or you can just avoid all that by pointing them to the Docker image and admitting "I only have the resources to ensure this works in that specific environment."
If it's a corporate project, it safeguards against other devs or someone in IT doing something foolish. Despite rules and procedures being in place, I've seen plenty of instances where IT or a dev changed something in a production environment without warning or announcement because they thought it would be fine.
Yeah, but if you don't think about what version of libc or Java is in your image, either you'll find out that your image cares or the programmer who comes along to maintain it will. At one job, Docker seemed to give data scientists the superpower of finding a different broken Python image for every image they built; these would work OK in dev and test and then blow up in production.
My early history with Docker was terrible: it just didn't work for me because I had a slow ADSL connection, and any attempt to download images would time out and fail. (I guess reliable downloads are a "call for prices" kind of feature.) Later on, working at an office with gigabit, I found that Docker only increased build times by a factor of 2-10x, depending on what I was doing.
Last year I wanted to build an image sorter and try the danbooru image board software. The git repository says to just do
    docker compose
and I get a bunch of incomprehensible error messages; it turns out the compose configuration is two versions old. Could I revert compose on my system to the old version? Maybe. It probably wouldn't break anything running on my machine, but I'd rather not find out. Could I update the configuration file? I guess. But my internet connection still isn't that fast, and I could go through a lot of run-break-fix cycles just to learn "you can't get here from there." So I cut and pasted the framework code out of one of my other projects and coded up a minimal product in a weekend, then had to spend another weekend adding features I'd tried to get away with not implementing.
Huh. In 20 years of using Gmail I can't remember ever seeing a phishing email in my inbox (they're all filtered out as spam, so I never see them). I'm curious what's led to our different experiences.
I've used the same email address since I was a kid and gave it to any website that asked for it without a thought. So now I'm facing the consequences. My email is just my name (which is very common), so I'm fortunate to have it and have never wanted to make a new one.