My whole team was playing around with it, and for a few weeks it was working pretty well for a couple of things, until the answers started to become incorrect and not useful.
I use ChatGPT fairly frequently. For example, I often have to write business emails. I'm usually pretty good at it, but sometimes I don't have the time or desire to find the right wording. This is where ChatGPT comes into play: I've taught it my writing style with several examples, and then I simply have it polish the emails I dash off quickly.
My boss doesn't know about it, but I don't hide it either. My company is very, very slow on the technical side and will only understand the benefits of AI in a few years.
Trying to do our work with ChatGPT and then discussing the results has been our #1 topic of kitchen conversation all year.
I use GPT-4 daily. I worked with it to create a quick and convenient app on my smartwatch, which allows it to provide wisdom and guidance fast whenever I need it. For more granular things, I use its BingChat interface, which can search the web and see images. The AI has helped me with understanding how to complete tasks, providing counseling for me, finding bugs in my code, writing functions, teaching me how to use software like Excel and Outlook, and giving me random information about various curiosities that pop into my mind.
I don't keep it a secret and tell anyone who asks. Plus it's kinda obvious that something is going on with me. I always wear bone-conduction headsets that allow the AI to whisper in my ear without shutting me out from the world, and sometimes I talk to my watch.
The responses to knowing what I'm doing have almost always been extreme: very positive or very negative. The machine is controversial, and when some can no longer stay in comfortable denial of its efficacy, they turn to speaking out against its use.
Edit: just fixed its translation method. Now the watch will hear non-English speech and automatically translate it for me too (uses the Whisper API).
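The gist of the Whisper call is roughly this, heavily simplified from the watch setup; the file handling and names here are placeholders, not my real code:

```python
# Rough sketch of the translation step, assuming the OpenAI Python SDK
# and that the captured speech has already been saved to a local audio file.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def translate_clip(path: str) -> str:
    """Send a short audio clip to the Whisper API and get English text back."""
    with open(path, "rb") as audio:
        result = client.audio.translations.create(
            model="whisper-1",  # the translations endpoint outputs English
            file=audio,
        )
    return result.text


print(translate_clip("clip.wav"))
```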
I've found ChatGPT is good for small tasks that require me to code in languages I don't use often and don't know well. My prime examples are writing CMakeLists.txt files and generating regex patterns.
Also, if I want to write a quick little bash script to do something, it's much better at remembering syntax and string handling tricks than me.
I know many people slightly younger than me are using ChatGPT to breeze through university assignments. Apparently there's one website that uses GPT that even draws diagrams for you, so you don't have to make 500 UML and class diagrams that take forever to create.
If only they would also understand what they're delivering.
My job actively encourages using AI to be more efficient and rewards curiosity/creative approaches. I'm in IT management.
A coworker of mine admitted to using this for writing treatment plans. Super unethical, and unrepentant about it. Why? Treatment plans are individual and contain PII. I used it for research a few times and it returned sources that are considered bunk at best and are hated within the community for their history. So I just went back to my journal aggregation.
Super unethical and unrepentant about it.
Super illegal in most jurisdictions too.
We openly use it and abuse it from top to bottom of the company, and for me, add Co-Pilot to that as well.
I use it as a search engine for the LLVM docs.
Works so much better than doxygen.
But it's no secret.
Not using ChatGPT at all because its queue is always full.
See, this confuses the hell out of me. I've NEVER been prevented from using ChatGPT by a queue. It always says that's a downside of not paying for it, but it seems like I just always pick the times when no one is using it.
I am the boss and I've had to cajole a couple of my employees into using it.
Any employer that thinks using ChatGPT carefully and judiciously is a bad thing is mistaken. When it works it's a great productivity boost, but you have to know when to kick it to the curb when it starts hallucinating.
English is not my first language. I use it to fix grammar and rephrase sentences to make communication easier.
The platform/language that I use isn't supported by ChatGPT or Bard, so I write my own code.
As a backend developer, I use it to explain some SQL, dev processes that I should know but am unsure about, or best practices for X.
SQL is my weakest link.
I've run emails through it to check tone since I'm hilariously bad at reading tone through text, but I'm pretty limited in how I can make use of that. There's info I deal with that is sensitive and/or proprietary that I can't paste into just any text box without potential severe repercussions.
Aside from asking it coding questions (which are generally a helpful pointer in the right direction), I also ask it a lot of questions like "Turn these values into an array" or something similar when I have to make an array of values (or anything else that's repetitive) and am too lazy to do it myself. Just a slight speedup in work.
As a coder, we have had discussions about using it at work. Everyone's fine with it for generation of test data, or for generating initial code skeletons but it still requires us to review every line. It saves a bit of time but that's all.
I use it at work but gladly tell the boss... It's all pluses if we can get the trivial work done faster. More time to relax. They don't watch what I do during the day. The boss relaxes too. All good.
I use it to speed up writing scripts on occasion, while attempting to abstract out any possibly confidential data.
I'm still fairly sure it's not allowed, however. But considering it would be easy to trace API calls and I haven't been approached yet, I'm assuming no one really cares.
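For what it's worth, the "abstracting out" step is nothing fancy on my end, just placeholder substitution before anything leaves my machine. Roughly like this; the patterns and placeholder names below are invented for illustration, not what I actually match:

```python
# Sketch of scrubbing possibly confidential data before pasting a script
# problem into ChatGPT. Patterns and placeholders are made-up examples.
import re

REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<EMAIL>"),        # email addresses
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<IP_ADDR>"),      # IPv4 addresses
    (re.compile(r"\bacme[-_]\w+\b", re.IGNORECASE), "<HOSTNAME>"),  # example internal host pattern
]


def scrub(text: str) -> str:
    """Replace anything that looks sensitive with a neutral placeholder."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text


print(scrub("ssh deploy@acme-db01 and mail logs to ops@example.com from 10.0.3.17"))
```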
I use it
My boss likes it too. Of course we don't trust it, but it can do certain things easier and faster than a human can.
I'm using the shit out of GPT-4 for coding and it works. And no, I never told anyone 'cause nobody asks.