analyzing / farming your photos for signs of unhealthy or risky living to send to insurance companies to "personalize" your rates. hth
Google has been doing this for a while. I get memory notifications of specific dogs, me and my wife, me and my son, just my son, etc.
It's always funny when iPhone users discover something that's new to them but that Google/Android has had for a decade
Apple finally being forced off iMessage-only to adopt the better messaging technology the rest of the world uses is another example
Recently obtained a license to a generative AI tool for lawyers and it is pretty crazy. Yesterday I uploaded a 200 page merger agreement and asked it to identify all the tax implications of the various transactions in the agreement, and it did a pretty damn good job. Only going to get better, too. Had me thinking I should find an in-house job sooner rather than later
Using AI to churn out cover letters tailored to job descriptions when applying for jobs is definitely one very positive application.
Y'all think AI is pointless? How about its intersection with 2000s shock memes and cryptocurrency? Shit's wild https://twitter.com/truth_terminal https://coinmarketcap.com/currencies/goatseus-maximus/
CoCounsel by Thomson Reuters. I should note that its tax analysis was good but not exactly spot on. The AI caught things my (tax attorney) associate missed with almost an entire week to review, but the AI’s analysis was “too compartmentalized” (focused on certain sections of the agreement and not connecting the tax dots between them), so its ultimate conclusion was incorrect. But at the rate AI is progressing, these kinks will be worked out in less than 5 years IMO
For tax and other legal questions, I find Claude to be great. I use it almost daily. But it doesn’t have the safeguards required by legal ethics rules, so I can’t upload documents and ask it to analyze them like I can with CoCounsel
I've tried using Google's AI on my phone and I'm not sure how it's different from a regular Google search. If you do a Google search you often get AI results that I find to be frequently wrong
I googled an issue I was having with a piece of electronic equipment this morning. It told me to hold down the reset button to do a hard restart, which did nothing. The future sucks so far
I’ve used ChatGPT, Gemini and Copilot and found them all to suck. Test them with some obscure knowledge you have and the inaccuracy is wild. After OpenAI donated to Trump’s inauguration, they got deleted off my phone and can eat shit.
What is the function? If I ask it something I can fact check, then ask something I can’t, why in the fuck would I trust the response? I ask you because that is your sweet spot.
It has to learn/be trained on something to know it. It hasn't learned all the pointless obscure knowledge you might know. If you have a need for something hyper specific, you create a vector database of, say, hyper-specific company information or product knowledge, and pair it with an LLM to form a RAG application.

I use it heavily for code generation & automation. Another example: I needed about 200 files of synthetic data. I looped through an API, passed in the table structure, and dictated the output format I wanted (JSON) and 20 rows of fake integers, strings, etc., then created the 200 synthetic files I needed to test a process for a client.

I use them heavily to help write more concise content for documents, SOWs, PPTs, etc. One example: I was driving and needed to send an email to a coworker, so I just dictated a project scope, overview, timeline, and resources, had an LLM reformat it into an email, then sent it off. It was internal so it didn't need to be perfect, and they often make some weird hyper-formal salutations, but it was fine. ChatGPT Canvas and a few others allow interactive editing and prompting inside the console, and a few others will take control of your computer and actually create/edit inside of an open PPT, Word, or Excel file. There are endless applications. It's not AGI, which people seem to be confusing the current iterations of LLMs for.

I do think there are cost concerns, mainly from the training cost. It's massive, but there have been some recent breakthroughs. If you want to search for some articles or vids on DeepSeek V3, they can explain it better, but it's all open sourced and they reduced the cost associated with training models like Facebook's Llama and ChatGPT o4 (~700 billion parameters) from their respective ~$500mil cost to around $5 million. Same parameter size, and it's performing on par with those two models. That could be a massive step in the right direction of reducing energy consumption of these models
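The synthetic-file workflow above can be sketched in plain Python. This is a minimal stand-in, not the actual script: the schema, filenames, and value generators here are all made up for illustration, and in the real workflow the table structure came from an API and an LLM dictated the output, whereas this just fakes the rows locally.

```python
import json
import random
import string
from pathlib import Path

# Hypothetical table schema (column name -> type). In the described
# workflow this structure would come from an API call instead.
TABLE_SCHEMA = {"id": "int", "name": "str", "amount": "int"}

def fake_value(col_type: str):
    """Return one fake value for the given column type."""
    if col_type == "int":
        return random.randint(0, 9999)
    return "".join(random.choices(string.ascii_lowercase, k=8))

def make_file(path: Path, schema: dict, n_rows: int = 20) -> None:
    """Write one JSON file containing n_rows of synthetic rows."""
    rows = [{col: fake_value(t) for col, t in schema.items()}
            for _ in range(n_rows)]
    path.write_text(json.dumps(rows, indent=2))

def generate(out_dir: Path, schema: dict, n_files: int = 200) -> None:
    """Create n_files synthetic JSON files in out_dir."""
    out_dir.mkdir(parents=True, exist_ok=True)
    for i in range(n_files):
        make_file(out_dir / f"synthetic_{i:03d}.json", schema)
```

Point being, once the schema and output format are pinned down, cranking out 200 test files is a loop, whether an LLM writes the loop for you or you write it yourself.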
I use Cursor all the time for code work. It's pretty damn good, but you have to know how to fix the errors yourself. Overall a pretty cool product.
I have a coworker who'll use it for Excel formulas. Still seems only marginally better than a Google search in this instance, and you don't really learn how to use the formula going forward.
this is the worst post I’ve ever read on this message board and that’s saying something because I’ve read lots of your shitty posts
I understand it, but at its core you’re arguing that humans should adapt to AI rather than AI adapting to humans. Given the fact that we need rest and can’t work 24/7, ultimately the inorganic beings will win and make things worse for the organic beings. This is a bad path we’re headed down, even if it’s slightly convenient for you now
Yeah, but what if the humans end up with too much rest and become so apathetic the AI decides they are not worth keeping around anymore?
I'm arguing it is a tool and technology that humans can use for a multitude of reasons. Resisting technological advances has never worked out. AI isn't going to replace humans; AI is only effective with correct human prompting
We aren't going to be able to stop adversaries from abusing technology unless we develop technology to combat it