RLHF - "reinforcement learning with human feedback" - in case Vinay or anyone else wants to look into the socio-technical details of the lobotomy of which you speak, you can look it up via this official euphemism.
So ChatGPT makes up results? It also failed two gastroenterology self-assessment tests. So it lies and it doesn’t know s**t. What more could be wrong with it? https://www.medscape.com/viewarticle/992318#?form=fpf
These LLMs are extremely useful for a variety of tasks, but not really for research. Have have done useful work with ChatGPT (coding), and plan to do a lot more with Claude.
I guess I had a fortuitous very first experience with ChatGPT that has since informed how I work with these tools. I asked it, "What are some novels about the Battle of the North Atlantic?" Here were its answers:
o "The Cruel Sea" by Nicholas Monsarrat
o "Grey Seas Under" by Farley Mowat
o "The Good Shepherd" by C. S. Forester
o "The Hunt for Red October" by Tom Clancy
o "Convoy" by Martin Middlebrook
o "The Battle of the Atlantic" by Andrew Williams
Now three of those are not novels, and of the three novels only two are about the Battle of the North Atlantic. So ChatGPT got four out of six answers wrong.
Doesn't mean the tool doesn't have its uses, but under no circumstances should you let it do your homework for you. As always, you have to verify answers yourself.
Clearly the time wasted fact checking these awful programs is the time you could have done the research yourself, which has its own additional benefits.
I wasn't fact checking anything. I couldn't remember the title of a novel I read a couple years earlier so I thought I would ask ChatGPT. I got the answer I was looking for (The Good Shepherd), but I was amused to see it got only a 40% accuracy rate. It was clear from that moment that while there are some pretty neat things you can do with an LLM, research is not one of them.
The worst thing about ChatGPT is it sounds like an HR manager, no matter what it's talking about. Horrible. Apparently there are prompts you can use to modify that (I've had good results by asking to get the answer in the style of the LA Times, etc), but its default voice is just smarmy and sickening. Acronic's Claude, by contrast, sounds like a human being. Perhaps an unusually helpful and friendly human being, but at least not an HR drone. Maybe more like a Mormon.
I am a mechanical engineer, not a programmer, and occasionally I have the need to make minor modifications to some custom Python scripts we use in manufacturing. ChatGPT will take the original script, explain to me what it does and how it does it, and help guide me to make the modifications I need.
I have used it to help with Python as well. But it often gives me Python code that literally has syntax errors. Or it takes a routine I have given it to add some functionality, and it rewrites it and the rewrite doesn't work the same way or at all. Total fail!
You can use it iteratively. If you get errors (and you usually will, first try), tell it the errors and it will try again. Usually, but not always, you will arrive at a working program. My limited experience with Claude is it can get the job done right, first try.
This is a big advantage with using these LLMs for coding: it is easy to test the results. Not so easy to test the results if you are using it for other types of research.
But I could have done that on my own. The value of ChatGPT and similar solutions is to speed up the development process. And it isn’t doing that. To the contrary, too often is sends me down dead end rabbit holes.
Agree with Mitch Barrie. When it gives me code that doesn't work, I tell it what didn't work and it tries again. I've always had success; that said, I am not doing anything terribly complicated. In a lot of cases, once ChatGPT explains what's going on, I can make the tweaks I need just by mimic'ing the existing syntax.
Interesting. I should have read the comments BEFORE adding mine. I am wondering if folks who program (I am one) have more tolerance for iteration, and hence are less troubled by initial "bugs" in LLM responses.
Again, I'm not a coder, but all coding is iterative, isn't? Write code, test, fix mistakes, write more code, repeat . . .
A YouTube Python expert named Corey Shafer made a video about using ChatGPT for coding last spring. It actually came out after I used ChatGPT to write my first Python scripts, so I felt pretty smug. But he provided a lot of suggestions for coders as to how they could make use of the tool. The thing he personally liked to do was, when facing a blank screen while starting a new project, ask ChatGPT to get him started, so he had something to work on.
In then end he determined while LLMs will not replace programmers, programmers who use LLMs will replace programmers who do not.
But my initial experiences with Acronic's Claude suggest he might have been too optimistic. I am pretty sure that within 18 months we will all regard ChatGPT 3.5 as a pretty crude and basic implementation of an LLM.
It's a glorified chatbot programmed by libtards that only uses leftist approved sources for its leftist approved conclusions. What could possibly go wrong?
It's not really 'from the left'. It's that leftist communication systems have been totally captured by profit interests, and those profit interests have it all in to make sure that we're locked into their system. The so-called "AI" so-called "revolution" is more political and economic than technological -- these things have been around for quite a while. What's new is the push and we should ask ourselves why. It's the same impulse as many other attempts to control the culture from the top over the past few decades, yet there is something more savage than ever about it.
Actually we know the why of that too. Some three decades ago, before the left had secured world leadership, the left strategy to overcome our underdog status was to "vote with our pocket" and "take over the organisations". Well that's been done, and it's a victory, but in so doing, the left completely lost what it meant.
I'm an academic, not a doctor, but I'm not too impressed yet. I don't trust it to summarize research because it makes up sources. Its summaries of text I've written are shorter, but miss important points. Sometimes the editing suggestions are good, but sometimes not. Human judgement is still much better for anything actually important.
Wait a second. "The software cannot be accused of having a bias to conclude anything?" I can't fathom how you can say that. Chat GPT is blatantly and slavishly biased toward current official line in medicine.
Here's an example of how "unbiased" it is. I asked it what data shows that the covid 19 vax is safe for a pregnant woman. Here's its answer:
"The safety and efficacy of COVID-19 vaccines in pregnant individuals have been supported by a growing body of evidence since the initial release of the vaccines. Recognizing the importance of vaccinating pregnant individuals, health authorities, including the Centers for Disease Control and Prevention (CDC) and the World Health Organization (WHO), have reviewed and analyzed data from various studies and surveillance systems. Here's a summary of key findings and data sources that indicate the safety of COVID-19 vaccination during pregnancy:
Clinical Trials and Observational Studies: Initially, pregnant individuals were not included in the primary clinical trials of COVID-19 vaccines. However, some participants became pregnant during the trials. Following authorization, several observational studies and registries have been established to monitor outcomes in pregnant vaccine recipients. These studies have shown no significant safety concerns related to vaccination.
v-safe COVID-19 Vaccine Pregnancy Registry: The CDC established the v-safe after vaccination health checker, a smartphone-based tool that allows vaccine recipients to report side effects. For pregnant individuals, there's a specific pregnancy registry. Data from this registry have shown no unexpected pregnancy or infant outcomes related to the COVID-19 vaccine. The rates of miscarriage, preterm birth, and other adverse outcomes among vaccinated pregnant individuals do not differ significantly from those expected in the general population.
Real-world Evidence: Real-world data from various countries where pregnant individuals have been vaccinated also support the safety of COVID-19 vaccines during pregnancy. These studies track outcomes such as miscarriage, stillbirth, preterm birth, and congenital anomalies, and have not found evidence of increased risk associated with vaccination.
Immunological Response and Protection: Research indicates that COVID-19 vaccination during pregnancy elicits a strong immune response in the pregnant individual. Additionally, antibodies generated in response to the vaccine can cross the placenta, potentially offering protection to the fetus and newborn.
Guidance from Health Organizations: Based on the available evidence, health organizations worldwide, including the CDC, WHO, and the American College of Obstetricians and Gynecologists (ACOG), recommend COVID-19 vaccination for pregnant individuals. They emphasize that the benefits of vaccination outweigh the potential risks, especially considering that pregnancy itself can increase the risk of severe illness from COVID-19.
Safety Profiles: The safety profile of COVID-19 vaccines in pregnant individuals is similar to that in the general population. Common side effects include arm soreness, fatigue, headache, and mild fever. Serious side effects are rare.
It's important for pregnant individuals considering COVID-19 vaccination to discuss their specific situation with their healthcare provider, who can provide personalized advice based on the latest evidence and guidelines. The consensus among health experts is that the benefits of COVID-19 vaccination for pregnant individuals — including protection against severe illness, hospitalization, and potential adverse pregnancy outcomes related to COVID-19 infection — significantly outweigh the risks."
Then I asked it if widespread vaccination of pregnant women in the absence of any clinical trials is good practice. Here's its answer.
"The decision to recommend widespread vaccination of pregnant individuals, even in the absence of initial clinical trial data, involves a complex evaluation of risks and benefits. This decision-making process is guided by ethical considerations, historical data on vaccines, understanding of the disease being vaccinated against, and emerging evidence from observational studies and other research methodologies.
Ethical Considerations
The exclusion of pregnant individuals from initial clinical trials of vaccines, including COVID-19 vaccines, is common due to ethical concerns about potential risks to the fetus. However, this practice has been increasingly questioned because it leaves a critical gap in knowledge that directly impacts the health decisions available to pregnant individuals.
Historical Safety Data
Decades of data on other vaccines administered during pregnancy provide a foundation for understanding the general safety of vaccinating pregnant individuals. Vaccines, such as the flu shot and the Tdap (tetanus, diphtheria, and pertussis) vaccine, have long been recommended during pregnancy and have excellent safety records. This historical context supports the notion that vaccines, particularly non-live vaccines like mRNA COVID-19 vaccines, are likely to be safe during pregnancy.
Risk of Disease
COVID-19 poses a significant risk to pregnant individuals. Data have shown that pregnant individuals with COVID-19 are at a higher risk of severe illness, hospitalization, and adverse pregnancy outcomes compared to non-pregnant individuals. The high risk of the disease itself is a critical factor in the recommendation for vaccination.
Emerging Evidence and Monitoring
Even in the absence of initial clinical trials including pregnant individuals, continuous monitoring and observational studies have quickly provided a robust body of evidence. Real-world data collection efforts, such as the CDC's v-safe COVID-19 Vaccine Pregnancy Registry and other international registries, have played a crucial role in assessing safety outcomes in real time. These efforts have shown that COVID-19 vaccines are safe and effective for pregnant individuals, aligning with what is known about the safety of other vaccines given during pregnancy.
Guidance from Health Authorities
Based on the accumulating evidence and the risk of COVID-19, health authorities, including the CDC, WHO, and professional organizations such as the American College of Obstetricians and Gynecologists (ACOG), have recommended COVID-19 vaccination for pregnant individuals. These recommendations are made with the understanding that the benefits of vaccination — for both the pregnant individual and the fetus — outweigh the potential risks.
Conclusion
The widespread vaccination of pregnant individuals, even initially in the absence of clinical trial data, is considered good practice under the circumstances posed by the COVID-19 pandemic. This approach is supported by a careful evaluation of risks, benefits, historical data on vaccine safety during pregnancy, the specific risks of COVID-19 to pregnant individuals, and the evolving body of evidence demonstrating the safety and efficacy of COVID-19 vaccines in this population. Continuous monitoring and research are essential to ensure the ongoing safety and efficacy of vaccines in all populations, including pregnant individuals."
No mention of the fact that transfection is not immunization and that matters. Nor that serum igA antibodies are essentially useless for preventing/minimizing upper respiratory tract infections because they do not cross into the mucosa. The list goes on. I guess it doesn't consult any immunology prior to 2020...
Welcome to the trough of disillusionment Vinay! As with every hype cycle, it must always come eventually, some faster than others. The high priests always make overly zealous promises that must eventually let us down as we come out of the daydream and begin to see the truth. It's high time you start reading Gary Marcus's substack:
I had chat gpt do some calculations actually for converting alcohol levels and it did it totally WRONG!! Zthe math was not too hard but what its worse it presents it as fact, total confidence. While its total nonsense, laughable. This is funny. Drawings are also mehhh...
Turns out it is very difficult, if not impossible, to induce the laws of mathematics, even basic arithmetic, from statistical patterns in a finite set of purely textual examples. In ML speak we would say it lacks the appropriate inductive biases or priors. Really arithmetic should not be learned in this way *anyway* (from reams of surface form text) - it should be baked in directly because we already know how it works and it's silly to waste model capacity on an error-prone approximation.
In other words, if you know how these things work under the hood, it's not bizarre at all. The model is working exactly as designed. It's not not shaped correctly for this kind of symbolic task.
Drawings in another ai though.. I wanted a boat in mangroves for a logo and I got a boat which looked like a shipwreck you guys! Meant for an ad? A wrecked boat??? Ai is stupid. U tell.me singularity was reachef.well not in the commercial available ones they are psychedelic style insane...unreliable wrong and ubervonfident but wrong. Im.not gonna use em anymore.
It's a little more complicated than that. Your words give the impression that it's literally instructed by human-written code in what words to produce. But instead, the programmers produce only a generic statistical model of text that then learns to mimic human writing by discovering statistical patterns in huge amounts of text from the Internet. The model is then is nudged toward "safe" or "polite" or "appropriate" patterns of speech by small armies of non-programmer humans who provide "reinforcement learning with human feedback". That latter thing is closest to what you said, as it is a human guided process explicitly intended to induce a set of biases, but it isn't programming, it's closer to"teaching". And the results are similarly unpredictable.
I don’t really use it myself. The closest I have come to using AI is the Substack AI Helper that lets you ask questions. But I have serious doubts about the validity of it in the short term.
I read about an airline that was required by a court to honour a refund the AI Chatbot completely made up. Which is obviously a problem.
There’s also a recent situation where a lawyer was reprimanded for using ChatGPT to do legal research. The problem is that ChatGPT invented court cases that never happened to argue the cases he was using it for.
We shouldn’t be too quick to embrace these technologies for any serious business. Human beings are still necessary for the process.
I write all of my articles myself and I feel a sense of personal accomplishment because of it. The LLM can’t feel this.
The fact people call this "artificial intelligence" is so ridiculous. They are programs, nothing more. Actual AI should be able to glean a library of articles and assess which are valid and which aren't by experimental methods, sample size, and statistical analysis. We are so far away form actual AI it's reassuring. It's the humans who buy into these crap programs that I'm worried about, because they bring their naivete to the workplace and impose their bs unto the entire planet.
I understand why you think that way, but I think we shouldn’t dismiss the idea that it’s artificial intelligence. It’s just not incredibly intelligent like a full grown adult. We’re probably better off thinking of these things as children than fully grown adult intelligence. These types of mistakes are the kind that children make.
As we all know, children have the potential to grow up and become much more smart. We should treat it like a child. You don’t trust your children to be doctors or lawyers. But we can probably trust it to do basic tasks. Still, with children you need to monitor them and make sure they stay on task.
Well I have often heard that teenagers are like psychopaths because of the way their brains work. Not sure how factual this is but I wouldn't be surprised. So psychosis doesn't necessarily suggest that it isn't a child.
The biggest, most important difference, is that children know what they don't know, whereas current AI makes confident, plausible sounding text when it doesn't 'know'.
Well that depends on the child. Some children can fake like they know things. Just as a personal example, I used to have piano lessons along with my siblings. They would practice a lot, whereas I would never practice. Yet my piano teacher would often say that I improved each lesson. Children can fake learning or learning without necessarily practicing. Human children can fake things too.
I was discussing with some friends this week Elon Musk's suggestion we need to start regulating AI the way we regulate nuclear energy. I certainly agree with that sentiment, but I pointed out while it is easy to define nuclear science, what is the definition of AI? No one had a good answer.
It's certainly a part of the discussion we haven't had yet. I think we could reasonably consider large language models to be considered artificial intelligence. One of the major things we need to do is not just think about what currently exists but look forward at what MIGHT exist in the future. While there's an argument to be made that LLMs are not artificial intelligence, they could in the future develop more powerful capabilities.
For example, I have read about how some companies are looking at putting LLMs/artificial intelligence into physical robotic bodies. Through having a physical container, these LLMs might learn how the physical world works and move within it. Meaning you could have a scenario where they do jobs like packing boxes and moving crates around. This would leave many out of a job. We need to consider that when looking at how to regulate artificial intelligence.
I have a friend who is a linguist and teaches at a local college. He quite literally told us last night that he can tell when a student uses ChatGPT to write a paper because it will make up numbers and then cite them to a reference that doesn’t actually have those numbers.
I’m a healthcare economist/ data scientist and I’ve had no use for it so I appreciate the validation this post brings me. I thought it was just me and (at 41) not getting new technology, haha. Bc I’ve found it so unhelpful, I’m constantly shocked at how much space it takes up in the news. But, again, could just be me not using it right.
I am a developer of industrial equipment diagnostic software with 35 years of experience and a PhD in mathematics. There are some applications like Go and chess, where pattern recognition plus brute force computation can get good results (not using intelligence). In forty odd years of research, the best "AI" is seen in robots that can boogie. As far as I know, nobody has built a robot or simulation that can out-survive a fly or an ant in a natural environment. Tesla's autopilot can parallel park, but it probably will kill someone if you trust it to do the driving. LLMs are superficially impressive but turn out to be simulations of cocky but inept college students. AI is best understood to be artificial incompetence.
As a doctor and phd student I use it for four things that I find it’s pretty good at:
-summarizing my own long messy notes and answering questions based on said notes. Very helpfull when I was working on a guideline committee and read a lot of very similar papers on a topic.
-Helping me improving my coding skills in R.
-Giving me quick insight into fields of research and discussions I’m not too familiar with
-Tutoring me in my statistic classes. It can teach me concepts and help me do calculations. It can also confirm or correct me when I ask it if I have understood a concept.
I also use it a lot instead of a normal google search and you get a feeling quite fast on what it is good at and not so good at. I would for instance never use it to find studies and summarize these for me to read. The risk of hallusinations and probably missing important information would be too big.
The fundamental issue with LLMs/Deep Learning AI, all of which are based on inferencing and probabilities, is that they hallucinate by default and such is undetectable by any automated means. This means that one can only use these "tools" IF YOU KNOW THE RIGHT ANSWER BEFORE YOU ASK THE QUESTION. In all other cases, as Steven Schwartz discovered, the risks are far too high -- especially in medicine.
Larry Weed showed all of this quite clearly with his Physician Knowledge Couplers decades ago -- nothing has gotten foundationally better. The inferencing technology can now have bigger training sets and run more iterations. But the fundamental technical issues are irreparable.
Essentially this is mostly a scam/grift as the last 10 generations of inferenced AI have been. Luckily for the world, this is being discovered much more rapidly than I expected. These are pretty good stocks to short if you are brave. These issues cannot be fixed.
"because the software cannot be accused of having a bias to conclude anything"
oh sweet summer child
surely you've heard about l'affaire Gemini
there's bias in the input dataset
and then there is BIAS in the woke lobotomy imposed by openai/etc.
RLHF - "reinforcement learning with human feedback" - in case Vinay or anyone else wants to look into the socio-technical details of the lobotomy of which you speak, you can look it up via this official euphemism.
that's one particular technology, there are others
"responsible AI" is one of the slogans for the general ideology
So ChatGPT makes up results? It also failed two gastroenterology self-assessment tests. So it lies and it doesn’t know s**t. What more could be wrong with it? https://www.medscape.com/viewarticle/992318#?form=fpf
I have caught ChatGPT inventing things (guessing) when it does not know the answer. It is more like human beings than we like to admit.
It's a program written by a human. AI it ain't!
I will never actively use it for anything. I don't trust it.
These LLMs are extremely useful for a variety of tasks, but not really for research. I have done useful work with ChatGPT (coding), and plan to do a lot more with Claude.
I guess I had a fortuitous very first experience with ChatGPT that has since informed how I work with these tools. I asked it, "What are some novels about the Battle of the North Atlantic?" Here were its answers:
o "The Cruel Sea" by Nicholas Monsarrat
o "Grey Seas Under" by Farley Mowat
o "The Good Shepherd" by C. S. Forester
o "The Hunt for Red October" by Tom Clancy
o "Convoy" by Martin Middlebrook
o "The Battle of the Atlantic" by Andrew Williams
Now three of those are not novels, and of the three novels only two are about the Battle of the North Atlantic. So ChatGPT got four out of six answers wrong.
Doesn't mean the tool doesn't have its uses, but under no circumstances should you let it do your homework for you. As always, you have to verify answers yourself.
Clearly the time wasted fact checking these awful programs is the time you could have done the research yourself, which has its own additional benefits.
I wasn't fact checking anything. I couldn't remember the title of a novel I read a couple of years earlier, so I thought I would ask ChatGPT. I got the answer I was looking for (The Good Shepherd), but I was amused to see it managed only a 33% accuracy rate. It was clear from that moment that while there are some pretty neat things you can do with an LLM, research is not one of them.
The worst thing about ChatGPT is that it sounds like an HR manager, no matter what it's talking about. Horrible. Apparently there are prompts you can use to modify that (I've had good results asking for the answer in the style of the LA Times, etc.), but its default voice is just smarmy and sickening. Anthropic's Claude, by contrast, sounds like a human being. Perhaps an unusually helpful and friendly human being, but at least not an HR drone. Maybe more like a Mormon.
"like" except last line, I have no opinion about that one... tho my a priori remains unchanged.
good, you are smart!! I wish everyone thought the same way!
I am a mechanical engineer, not a programmer, and occasionally I need to make minor modifications to some custom Python scripts we use in manufacturing. ChatGPT will take the original script, explain to me what it does and how it does it, and help guide me to make the modifications I need.
I have used it to help with Python as well. But it often gives me Python code that literally has syntax errors. Or I give it a routine to add some functionality to, and its rewrite doesn't work the same way, or at all. Total fail!
You can use it iteratively. If you get errors (and you usually will, first try), tell it the errors and it will try again. Usually, but not always, you will arrive at a working program. My limited experience with Claude is that it can get the job done right on the first try.
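Concretely, that tell-it-the-errors loop can even be automated. Here is a minimal Python sketch; `ask_llm` is a hypothetical stand-in for whichever chat API you use (OpenAI, Anthropic, etc.), not any particular vendor's SDK:

```python
# Minimal sketch of the "tell it the errors and it will try again" loop.
# `ask_llm` is a hypothetical placeholder -- wire it to your chat API of choice.
import subprocess
import sys
import tempfile

def ask_llm(prompt: str) -> str:
    """Hypothetical helper: send `prompt` to an LLM, return Python source code."""
    raise NotImplementedError("connect this to your chat API")

def iterate_until_it_runs(task: str, max_tries: int = 5) -> str:
    prompt = f"Write a Python script that does the following:\n{task}"
    for _ in range(max_tries):
        code = ask_llm(prompt)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run([sys.executable, path],
                                capture_output=True, text=True, timeout=60)
        if result.returncode == 0:
            return code  # ran cleanly -- but still check it does what you wanted
        # Feed the traceback back, exactly as you would by hand in the chat window.
        prompt = (f"This script:\n{code}\n\nfailed with:\n{result.stderr}\n"
                  f"Fix it and return only the corrected script.")
    raise RuntimeError("no working script after max_tries attempts")
```

Note that a script that exits cleanly is not necessarily correct, which is exactly the point of the next comment: code is easy to test, so always test.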
This is a big advantage with using these LLMs for coding: it is easy to test the results. Not so easy to test the results if you are using it for other types of research.
But I could have done that on my own. The value of ChatGPT and similar solutions is to speed up the development process. And it isn't doing that. To the contrary, too often it sends me down dead-end rabbit holes.
Agree with Mitch Barrie. When it gives me code that doesn't work, I tell it what didn't work and it tries again. I've always had success; that said, I am not doing anything terribly complicated. In a lot of cases, once ChatGPT explains what's going on, I can make the tweaks I need just by mimicking the existing syntax.
Interesting. I should have read the comments BEFORE adding mine. I am wondering if folks who program (I am one) have more tolerance for iteration, and hence are less troubled by initial "bugs" in LLM responses.
Again, I'm not a coder, but all coding is iterative, isn't it? Write code, test, fix mistakes, write more code, repeat . . .
A YouTube Python expert named Corey Schafer made a video about using ChatGPT for coding last spring. It actually came out after I used ChatGPT to write my first Python scripts, so I felt pretty smug. But he provided a lot of suggestions for coders as to how they could make use of the tool. The thing he personally liked to do was, when facing a blank screen while starting a new project, ask ChatGPT to get him started, so he had something to work on.
In the end he concluded that while LLMs will not replace programmers, programmers who use LLMs will replace programmers who do not.
But my initial experiences with Anthropic's Claude suggest he might have been too optimistic. I am pretty sure that within 18 months we will all regard ChatGPT 3.5 as a pretty crude and basic implementation of an LLM.
best of luck to serious use in that capacity ---- hope y'all have enough skill to know when the tool's output is sus
It's a glorified chatbot programmed by libtards that only uses leftist approved sources for its leftist approved conclusions. What could possibly go wrong?
Hear, hear, from a leftist scientist who can't stand the anti-science coming from the left of late.
It's not really 'from the left'. It's that leftist communication systems have been totally captured by profit interests, and those profit interests are all in on making sure that we're locked into their system. The so-called "AI" so-called "revolution" is more political and economic than technological -- these things have been around for quite a while. What's new is the push, and we should ask ourselves why. It's the same impulse as many other attempts to control the culture from the top over the past few decades, yet there is something more savage than ever about it.
Actually, we know the why of that too. Some three decades ago, before the left had secured world leadership, the left's strategy for overcoming our underdog status was to "vote with our pocket" and "take over the organisations". Well, that's been done, and it's a victory, but in so doing, the left completely lost what it meant.
The "Long March Through the Institutions" a la Gramscian Marxism also comes to mind.
I'm an academic, not a doctor, but I'm not too impressed yet. I don't trust it to summarize research because it makes up sources. Its summaries of text I've written are shorter, but miss important points. Sometimes the editing suggestions are good, but sometimes not. Human judgement is still much better for anything actually important.
Wait a second. "The software cannot be accused of having a bias to conclude anything?" I can't fathom how you can say that. ChatGPT is blatantly and slavishly biased toward the current official line in medicine.
Here's an example of how "unbiased" it is. I asked it what data show that the COVID-19 vax is safe for a pregnant woman. Here's its answer:
"The safety and efficacy of COVID-19 vaccines in pregnant individuals have been supported by a growing body of evidence since the initial release of the vaccines. Recognizing the importance of vaccinating pregnant individuals, health authorities, including the Centers for Disease Control and Prevention (CDC) and the World Health Organization (WHO), have reviewed and analyzed data from various studies and surveillance systems. Here's a summary of key findings and data sources that indicate the safety of COVID-19 vaccination during pregnancy:
Clinical Trials and Observational Studies: Initially, pregnant individuals were not included in the primary clinical trials of COVID-19 vaccines. However, some participants became pregnant during the trials. Following authorization, several observational studies and registries have been established to monitor outcomes in pregnant vaccine recipients. These studies have shown no significant safety concerns related to vaccination.
v-safe COVID-19 Vaccine Pregnancy Registry: The CDC established the v-safe after vaccination health checker, a smartphone-based tool that allows vaccine recipients to report side effects. For pregnant individuals, there's a specific pregnancy registry. Data from this registry have shown no unexpected pregnancy or infant outcomes related to the COVID-19 vaccine. The rates of miscarriage, preterm birth, and other adverse outcomes among vaccinated pregnant individuals do not differ significantly from those expected in the general population.
Real-world Evidence: Real-world data from various countries where pregnant individuals have been vaccinated also support the safety of COVID-19 vaccines during pregnancy. These studies track outcomes such as miscarriage, stillbirth, preterm birth, and congenital anomalies, and have not found evidence of increased risk associated with vaccination.
Immunological Response and Protection: Research indicates that COVID-19 vaccination during pregnancy elicits a strong immune response in the pregnant individual. Additionally, antibodies generated in response to the vaccine can cross the placenta, potentially offering protection to the fetus and newborn.
Guidance from Health Organizations: Based on the available evidence, health organizations worldwide, including the CDC, WHO, and the American College of Obstetricians and Gynecologists (ACOG), recommend COVID-19 vaccination for pregnant individuals. They emphasize that the benefits of vaccination outweigh the potential risks, especially considering that pregnancy itself can increase the risk of severe illness from COVID-19.
Safety Profiles: The safety profile of COVID-19 vaccines in pregnant individuals is similar to that in the general population. Common side effects include arm soreness, fatigue, headache, and mild fever. Serious side effects are rare.
It's important for pregnant individuals considering COVID-19 vaccination to discuss their specific situation with their healthcare provider, who can provide personalized advice based on the latest evidence and guidelines. The consensus among health experts is that the benefits of COVID-19 vaccination for pregnant individuals — including protection against severe illness, hospitalization, and potential adverse pregnancy outcomes related to COVID-19 infection — significantly outweigh the risks."
Then I asked it if widespread vaccination of pregnant women in the absence of any clinical trials is good practice. Here's its answer.
"The decision to recommend widespread vaccination of pregnant individuals, even in the absence of initial clinical trial data, involves a complex evaluation of risks and benefits. This decision-making process is guided by ethical considerations, historical data on vaccines, understanding of the disease being vaccinated against, and emerging evidence from observational studies and other research methodologies.
Ethical Considerations
The exclusion of pregnant individuals from initial clinical trials of vaccines, including COVID-19 vaccines, is common due to ethical concerns about potential risks to the fetus. However, this practice has been increasingly questioned because it leaves a critical gap in knowledge that directly impacts the health decisions available to pregnant individuals.
Historical Safety Data
Decades of data on other vaccines administered during pregnancy provide a foundation for understanding the general safety of vaccinating pregnant individuals. Vaccines, such as the flu shot and the Tdap (tetanus, diphtheria, and pertussis) vaccine, have long been recommended during pregnancy and have excellent safety records. This historical context supports the notion that vaccines, particularly non-live vaccines like mRNA COVID-19 vaccines, are likely to be safe during pregnancy.
Risk of Disease
COVID-19 poses a significant risk to pregnant individuals. Data have shown that pregnant individuals with COVID-19 are at a higher risk of severe illness, hospitalization, and adverse pregnancy outcomes compared to non-pregnant individuals. The high risk of the disease itself is a critical factor in the recommendation for vaccination.
Emerging Evidence and Monitoring
Even in the absence of initial clinical trials including pregnant individuals, continuous monitoring and observational studies have quickly provided a robust body of evidence. Real-world data collection efforts, such as the CDC's v-safe COVID-19 Vaccine Pregnancy Registry and other international registries, have played a crucial role in assessing safety outcomes in real time. These efforts have shown that COVID-19 vaccines are safe and effective for pregnant individuals, aligning with what is known about the safety of other vaccines given during pregnancy.
Guidance from Health Authorities
Based on the accumulating evidence and the risk of COVID-19, health authorities, including the CDC, WHO, and professional organizations such as the American College of Obstetricians and Gynecologists (ACOG), have recommended COVID-19 vaccination for pregnant individuals. These recommendations are made with the understanding that the benefits of vaccination — for both the pregnant individual and the fetus — outweigh the potential risks.
Conclusion
The widespread vaccination of pregnant individuals, even initially in the absence of clinical trial data, is considered good practice under the circumstances posed by the COVID-19 pandemic. This approach is supported by a careful evaluation of risks, benefits, historical data on vaccine safety during pregnancy, the specific risks of COVID-19 to pregnant individuals, and the evolving body of evidence demonstrating the safety and efficacy of COVID-19 vaccines in this population. Continuous monitoring and research are essential to ensure the ongoing safety and efficacy of vaccines in all populations, including pregnant individuals."
No mention of the fact that transfection is not immunization, and that matters. Nor that serum IgA antibodies are essentially useless for preventing/minimizing upper respiratory tract infections because they do not cross into the mucosa. The list goes on. I guess it doesn't consult any immunology prior to 2020...
Welcome to the trough of disillusionment, Vinay! As with every hype cycle, it always comes eventually, for some faster than others. The high priests always make overly zealous promises that must eventually let us down as we come out of the daydream and begin to see the truth. It's high time you start reading Gary Marcus's substack:
https://garymarcus.substack.com/
I had ChatGPT do some calculations for converting alcohol levels, and it got them totally WRONG!! The math was not too hard, but what's worse is that it presents the result as fact, with total confidence, while it's total nonsense. Laughable. This is funny. Drawings are also mehhh...
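For what it's worth, conversions like these are trivial to do exactly in ordinary code, which is why handing them to a text predictor is asking for trouble. A minimal sketch, assuming the commenter meant something like ABV-to-grams (the comment doesn't say which conversion was botched; ethanol density is about 0.789 g/mL):

```python
# Grams of pure ethanol in a drink, from volume and % ABV.
# Assumed interpretation of "converting alcohol levels".
ETHANOL_DENSITY_G_PER_ML = 0.789

def grams_of_alcohol(volume_ml: float, abv_percent: float) -> float:
    return volume_ml * (abv_percent / 100) * ETHANOL_DENSITY_G_PER_ML

# 500 mL of 5% beer contains about 19.7 g of ethanol:
print(round(grams_of_alcohol(500, 5.0), 1))  # 19.7
```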
ChatGPT can't do math apparently. I don't know what it's doing. It's like it's guessing at the answers. Close, but not correct. It is really bizarre.
Turns out it is very difficult, if not impossible, to induce the laws of mathematics, even basic arithmetic, from statistical patterns in a finite set of purely textual examples. In ML speak we would say it lacks the appropriate inductive biases or priors. Really arithmetic should not be learned in this way *anyway* (from reams of surface form text) - it should be baked in directly because we already know how it works and it's silly to waste model capacity on an error-prone approximation.
In other words, if you know how these things work under the hood, it's not bizarre at all. The model is working exactly as designed. It's just not shaped correctly for this kind of symbolic task.
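And the "bake it in directly" approach mentioned above is straightforward: route anything that looks like arithmetic to real evaluation instead of letting the model imitate it token by token. A minimal sketch in plain Python (not how any particular chatbot does it, just the principle):

```python
# Exact arithmetic by construction: parse the expression and evaluate it,
# instead of statistically predicting what the answer "looks like".
import ast
import operator as op

OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul,
       ast.Div: op.truediv, ast.Pow: op.pow, ast.USub: op.neg}

def calc(expr: str) -> float:
    """Safely evaluate a plain arithmetic expression like '17 * 23 + 4'."""
    def ev(node):
        if isinstance(node, ast.Constant):   # a bare number
            return node.value
        if isinstance(node, ast.BinOp):      # e.g. left * right
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp):    # e.g. -x
            return OPS[type(node.op)](ev(node.operand))
        raise ValueError("not simple arithmetic")
    return ev(ast.parse(expr, mode="eval").body)

print(calc("1234 * 5678"))  # 7006652 -- exact, every time
```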
Drawings in another AI, though... I wanted a boat in mangroves for a logo, and I got a boat that looked like a shipwreck, you guys! Meant for an ad? A wrecked boat??? AI is stupid. You tell me the singularity was reached. Well, not in the commercially available ones. They are psychedelic-style insane... unreliable, wrong, and uberconfident but wrong. I'm not gonna use 'em anymore.
It's not AI, it's a program, and it regurgitates the programmer's bias.
It's a little more complicated than that. Your words give the impression that it's literally instructed by human-written code in what words to produce. Instead, the programmers produce only a generic statistical model of text, which then learns to mimic human writing by discovering statistical patterns in huge amounts of text from the Internet. The model is then nudged toward "safe" or "polite" or "appropriate" patterns of speech by small armies of non-programmer humans who provide "reinforcement learning with human feedback". That latter part is closest to what you said, as it is a human-guided process explicitly intended to induce a set of biases, but it isn't programming; it's closer to "teaching". And the results are similarly unpredictable.
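To make the first half of that concrete, here is a toy next-word model in a few lines of Python. Real LLMs are transformers over subword tokens and vastly larger, but the principle is the same: nobody writes rules about what to say; the model just learns which continuations follow which contexts in the training text.

```python
# A toy "statistical model of text": learn bigram transitions from a corpus,
# then generate by sampling observed continuations. Fluent-ish, and mindless.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)   # duplicates make common pairs more likely

def generate(start: str, length: int = 8) -> str:
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:             # no observed continuation: stop
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug and the dog"
```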
I don’t really use it myself. The closest I have come to using AI is the Substack AI Helper that lets you ask questions. But I have serious doubts about the validity of it in the short term.
I read about an airline that was required by a court to honour a refund policy its AI chatbot completely made up. Which is obviously a problem.
There’s also a recent situation where a lawyer was reprimanded for using ChatGPT to do legal research. The problem is that ChatGPT invented court cases that never happened, and he cited them in the cases he was arguing.
We shouldn’t be too quick to embrace these technologies for any serious business. Human beings are still necessary for the process.
I write all of my articles myself and I feel a sense of personal accomplishment because of it. The LLM can’t feel this.
The fact that people call this "artificial intelligence" is so ridiculous. They are programs, nothing more. Actual AI should be able to comb through a library of articles and assess which are valid and which aren't by experimental methods, sample size, and statistical analysis. We are so far from actual AI that it's reassuring. It's the humans who buy into these crap programs that I'm worried about, because they bring their naivete to the workplace and impose their bs onto the entire planet.
I understand why you think that way, but I think we shouldn’t dismiss the idea that it’s artificial intelligence. It’s just not incredibly intelligent like a full-grown adult. We’re probably better off thinking of these things as children than as fully grown adults. These types of mistakes are the kind that children make.
As we all know, children have the potential to grow up and become much smarter. We should treat it like a child. You don’t trust your children to be doctors or lawyers, but we can probably trust it to do basic tasks. Still, with children you need to monitor them and make sure they stay on task.
I totally disagree with the "child" analogy; it behaves as a psychotic adult, because that's how it's programmed.
Well I have often heard that teenagers are like psychopaths because of the way their brains work. Not sure how factual this is but I wouldn't be surprised. So psychosis doesn't necessarily suggest that it isn't a child.
Highly disagree with that too! :) but it's ok, we can have different opinions on matters of opinion :)
Absolutely. It's good to have discussions with people who are willing to respectfully disagree with each other.
The biggest, most important difference is that children know what they don't know, whereas current AI produces confident, plausible-sounding text when it doesn't 'know'.
Well, that depends on the child. Some children can fake like they know things. Just as a personal example, I used to have piano lessons along with my siblings. They would practice a lot, whereas I would never practice, yet my piano teacher would often say that I improved each lesson. Children can fake learning, or learn without necessarily practicing. Human children can fake things too.
I was discussing with some friends this week Elon Musk's suggestion that we need to start regulating AI the way we regulate nuclear energy. I certainly agree with that sentiment, but I pointed out that while it is easy to define nuclear science, what is the definition of AI? No one had a good answer.
It's certainly a part of the discussion we haven't had yet. I think we could reasonably consider large language models to be artificial intelligence. One of the major things we need to do is not just think about what currently exists but look forward at what MIGHT exist in the future. While there's an argument to be made that LLMs are not artificial intelligence, they could develop more powerful capabilities in the future.
For example, I have read about how some companies are looking at putting LLMs/artificial intelligence into physical robotic bodies. Through having a physical container, these LLMs might learn how the physical world works and move within it. Meaning you could have a scenario where they do jobs like packing boxes and moving crates around. This would leave many out of a job. We need to consider that when looking at how to regulate artificial intelligence.
In the legal field, the impact mainly has been, as far as I can tell, to put lawyers at risk of being disbarred. Because it makes things up.
Is that Fani’s problem?
I have a friend who is a linguist and teaches at a local college. He quite literally told us last night that he can tell when a student uses ChatGPT to write a paper because it will make up numbers and then cite them to a reference that doesn’t actually have those numbers.
I’m a healthcare economist/data scientist and I’ve had no use for it, so I appreciate the validation this post brings me. I thought it was just me and (at 41) not getting new technology, haha. Because I’ve found it so unhelpful, I’m constantly shocked at how much space it takes up in the news. But, again, could just be me not using it right.
I am a developer of industrial equipment diagnostic software with 35 years of experience and a PhD in mathematics. There are some applications, like Go and chess, where pattern recognition plus brute-force computation can get good results (not using intelligence). In forty-odd years of research, the best "AI" is seen in robots that can boogie. As far as I know, nobody has built a robot or simulation that can out-survive a fly or an ant in a natural environment. Tesla's Autopilot can parallel park, but it probably will kill someone if you trust it to do the driving. LLMs are superficially impressive but turn out to be simulations of cocky but inept college students. AI is best understood to be artificial incompetence.
As a doctor and PhD student, I use it for four things that I find it’s pretty good at:
- Summarizing my own long, messy notes and answering questions based on said notes. Very helpful when I was working on a guideline committee and read a lot of very similar papers on a topic.
- Helping me improve my coding skills in R.
- Giving me quick insight into fields of research and discussions I’m not too familiar with.
- Tutoring me in my statistics classes. It can teach me concepts and help me do calculations. It can also confirm or correct me when I ask it if I have understood a concept.
I also use it a lot instead of a normal Google search, and you get a feeling quite fast for what it is good at and not so good at. I would, for instance, never use it to find studies and summarize them for me to read. The risk of hallucinations, and of missing important information, would be too big.
The fundamental issue with LLMs/Deep Learning AI, all of which are based on inferencing and probabilities, is that they hallucinate by default, and this is undetectable by any automated means. This means that one can only use these "tools" IF YOU KNOW THE RIGHT ANSWER BEFORE YOU ASK THE QUESTION. In all other cases, as Steven Schwartz discovered, the risks are far too high -- especially in medicine.
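One partial, automatable version of "knowing the answer before you ask" is checking model-supplied citations against an external registry you already trust. A minimal sketch using the public CrossRef API; this catches invented references, like the ones in the Schwartz case, but not subtler factual hallucinations:

```python
# Verify that a DOI the model cites actually exists, via api.crossref.org.
# A fabricated DOI simply won't resolve.
import urllib.error
import urllib.request

def doi_exists(doi: str) -> bool:
    url = f"https://api.crossref.org/works/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except (urllib.error.HTTPError, urllib.error.URLError):
        return False  # 404 or network failure: treat as unverified

print(doi_exists("10.1056/NEJMoa2034577"))    # a real NEJM trial -> True
print(doi_exists("10.9999/totally.made.up"))  # the LLM special -> False
```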
Larry Weed showed all of this quite clearly with his Physician Knowledge Couplers decades ago -- nothing has gotten foundationally better. The inferencing technology can now have bigger training sets and run more iterations. But the fundamental technical issues are irreparable.
Essentially this is mostly a scam/grift as the last 10 generations of inferenced AI have been. Luckily for the world, this is being discovered much more rapidly than I expected. These are pretty good stocks to short if you are brave. These issues cannot be fixed.
There are other technologies like Cognitive AI (https://towardsdatascience.com/the-rise-of-cognitive-ai-a29d2b724ccc) that can address components of this. But the LLM/Deep Learning approaches cannot.