I don't understand how anyone can rationalize this bill in the face of what OpenAI just agreed to with the DoD.
AI can surveil and direct munitions, but it can't answer legal questions. Wouldn't this also violate the "no state may limit or restrict the use of AI" position that the current administration is pushing?
> I don't understand how anyone can rationalize this bill in the face of what OpenAI just agreed to with the DoD.
NY doesn’t have any obligation to agree with the DoD. Also, the applications seem quite different, although I don’t think AI should actually be relied on for either one!
> Wouldn't this also violate the "no state may limit or restrict the use of AI" that the current administration is pushing?
No, it doesn’t violate it. States can’t violate executive orders, because executive orders aren’t instructions for the states; they are instructions for the executive branch. For example, if this becomes law, the US Attorney General will try to find some way to fight it.
I commented this earlier and it was instantly downvoted for some reason, but I totally agree with both points. I also think they are intrinsically connected in many ways, because using customer development to understand and build an initial audience often helps identify distribution channels. Here are a few examples I have found:
1) I made a product to help people manage their diets. It was a crowded space, so it was hard to find users; following Steve Blank's customer development process, I found that the product should have been made for dietitians, not for end users. This may have been obvious to most, but it wasn't to me at the time.
2) I made a product for Medicare/Medicaid, but it was hard to get my foot in the door to even be considered for any of the government contracts. Following the same model again, I found that politicians who had made campaign promises my product could help fulfill would themselves serve as distribution channels.
One thing I disagree with, or don't understand, is:
> ship earlier now (often free and open source) to learn faster, but it doesn’t change the attention dynamics much.
I have found the opposite: it seems like the bar is much higher now, and not even the productivity boost I have gotten from using AI has allowed me to ship secure, high-quality products as fast as I would like. Perhaps I am just too insecure about my work, but it's all new, I guess.
I'm recreating the "Gov" - not really, just my idea of it, globally, e.g.: ua.gov-ai.co / ua.ai-gov.co / ng.gov-ai.co / ng.ai-gov.co - most progress made so far with CBER and DDPs.
* HHS -> FDA -> CBER
It's important, IMO (IMO only, NOT AN EXPERT), because it helps you understand first principles better. As fundamentals change, it helps me reevaluate these things, even though I know nothing will ever come of them.
I am 422 agencies in so far, hoping to finish in time for Juneteenth. Can't post it here because........... but yea.
It’s striking how corporate complicity in systematic oppression can be sanitized over time until egregious patterns only resurface as political talking points.
In 6th grade I stole my teacher's car to go get a girl some Lemonheads... This was just the latest such incident, so my teachers said I had to stand in the corner for the rest of the year; but my computer teacher, Ms. Melton, said I could spend the time in the computer lab, and she started teaching me JS. Then on work-study day she sent me to her friends at a local ISP and they gave me an internship, etc., etc. 30 years later and I am a software engineer.
I am 41, JS was released 29 years ago, and 6th grade is 11-12 years old, so my math might be a little off. But it was JS; I remember because it was new and I thought it was so cool to do alerts, LOL.
> I think part of what's been happening here is that the hubris of the AI startups is really showing through.
I think it is somewhat reductive to assign this "hubris" to "AI startups". I would posit that this hubris is more akin to the superiority we feel as human beings.
I have heard people say several times that they "treat AI like a Jr. employee". I think that within the context of a project, AI should be treated based on its level of contribution. If AI is the expert, I am not going to approach it as if I am an SME who knows exactly what to ask. I am going to focus on the things I know best and ask questions around those to discover and learn the best approach. Obviously there is nuance here that is outside the scope of this discussion, but these two fundamentally different approaches have yielded materially different outcomes in my experience.
Absolutely not. When giving tasks to an AI, we supply them with context, examples of what to do, examples of what not to do, and we clarify their role and job. We stick with them as they work and direct them accordingly when something goes wrong.
I've no idea what would happen if we treated a junior developer like that.
> The problem I find in many cases is that people are restrained by their imagination of what's possible, so they target existing workflows for AI.
I concur, and would add that they are also restrained by the limitations of existing "systems" and by our implicit and explicit expectations of those systems. I am currently attempting to mitigate the harm done by this restriction by starting with a first-principles analysis of the problem being solved before starting the work. For example, let's take a well-established and well-documented system like the SSA.
When attempting to develop, refactor, or extend such a system, what is the proper thought process? As I see it, there are two paths:
Path 1:
a) Break down the existing workflows
b) Identify key performance indicators (KPIs) that align with your business goals
c) Collect and analyze data related to those KPIs using BPM tools
d) Find the most expensive worst performing workflows
e) Automate them E2E w/ interface contracts on either side
This approach locks you into the existing restrictions of the system, its workflows, its implementation, etc.
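Path 1 steps (c)-(d) can be sketched in a few lines. This is a hypothetical illustration only: the workflow records and the cost-times-error "pain score" are my own assumptions, not a real BPM tool's output.

```python
# Hypothetical sketch of Path 1 steps (c)-(d): rank workflows by KPI data
# to surface the most expensive, worst-performing candidates for automation.
# The records and scoring formula below are illustrative assumptions.
workflows = [
    {"name": "benefit_application", "annual_cost": 4_000_000, "error_rate": 0.12},
    {"name": "address_change",      "annual_cost":   300_000, "error_rate": 0.02},
    {"name": "appeals_processing",  "annual_cost": 2_500_000, "error_rate": 0.30},
]

def pain_score(wf: dict) -> float:
    # Simple proxy: annual cost weighted by how often the workflow fails.
    return wf["annual_cost"] * wf["error_rate"]

# Worst offenders first; these become the E2E automation targets in step (e).
worst_first = sorted(workflows, key=pain_score, reverse=True)
for wf in worst_first:
    print(f"{wf['name']}: {pain_score(wf):,.0f}")
```

Note that even this ranking inherits the existing workflow boundaries, which is exactly the lock-in described above.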
Path 2:
a) Analyze the system to understand its goal in terms of first principles, e.g.: What is the mission of the SSA? To move money based on conditional logic.
b) What systems/data structures are closest to this function, and does the legacy system reflect this at its core? E.g.: the SSA should just be a ledger, IMO
c) If yes, go to "Path 1"; if no, go to "d"
d) Identify the core function of the system, the critical path (core workflow), and all required parties
e) Make an MVP that does only the bare minimum
By following Path 2, and starting with an AI analysis of the actual problem rather than the problem as it exists as a solution within an existing system, the previous restrictions can be avoided, in my opinion.
Note: Obviously this is a gross oversimplification of the project management process, and there are usually external factors that weigh in and decide which path is possible for a given initiative. My goal here was just to highlight a specific deviation from my normal process that has yielded benefits so far in my own personal experience.
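The Path 2 framing above ("move money based on conditional logic" over a ledger) can be sketched minimally. Everything here is a hypothetical illustration of the bare-minimum MVP idea; the `Rule`/`Ledger` names and the eligibility rules are invented for the example and bear no resemblance to real SSA logic.

```python
# Hypothetical sketch of the Path 2 "SSA as a ledger" framing: the core
# function is just "move money based on conditional logic". All names and
# rules here are illustrative assumptions, not real eligibility logic.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Rule:
    """A condition plus the payment it authorizes when the condition holds."""
    name: str
    condition: Callable[[dict], bool]
    amount: Callable[[dict], float]

@dataclass
class Ledger:
    """Append-only record of money movements -- the bare-minimum MVP core."""
    entries: list = field(default_factory=list)

    def apply(self, person: dict, rules: list[Rule]) -> float:
        paid = 0.0
        for rule in rules:
            if rule.condition(person):
                amt = rule.amount(person)
                self.entries.append((person["id"], rule.name, amt))
                paid += amt
        return paid

# Toy rules for illustration only ("pia" = a hypothetical base benefit amount).
rules = [
    Rule("retirement", lambda p: p["age"] >= 67, lambda p: p["pia"]),
    Rule("early_retirement", lambda p: 62 <= p["age"] < 67, lambda p: p["pia"] * 0.7),
]

ledger = Ledger()
print(ledger.apply({"id": "A1", "age": 70, "pia": 1800.0}, rules))  # 1800.0
print(ledger.apply({"id": "B2", "age": 63, "pia": 1000.0}, rules))  # 700.0
```

The point of the exercise is that everything else (intake forms, appeals, reporting) hangs off this core, rather than the core being shaped by those surrounding workflows.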