Arsh.ai
336 followers
Colorado first to pass artificial intelligence law to protect consumers - The Colorado Sun http://dlvr.it/T74TKN #ai #artificialintelligence
More Relevant Posts
-
Rogers Towers, P.A.
1,708 followers
Congratulations, Trace Jackson, on the publication of your article on AI tools in legal practice in The Florida Bar’s WCS News & 440 Report! Your forward-thinking insights are helping to shape the future of our profession. We are proud of your accomplishment and excited to see your continued impact. Read the article: https://lnkd.in/e5fw_-Jd Learn more about IP & technology law: https://lnkd.in/ejpmGMvF #artificialintelligence #ai #aiandthelaw #legalinnovation #floridabar #futureoflaw
-
Arsh.ai
How Should Businesses Implement Artificial Intelligence Tools, Legally - The National Law Review http://dlvr.it/T88HSp #ai #artificialintelligence
-
Luciano Floridi
Professor and Founding Director of the Digital Ethics Center, Yale University - For any information please contact Manuela Ronchi (Action Agency) +393930333228 [email protected]
Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity
RT @SSRN: Generative #AI in EU Law: This paper delves into the legal & regulatory implications of Generative AI & #LLMs in the European Union context. Read more: https://t.co/6mRZxXbZxu Subscribe: https://t.co/8kJY3k7Zc2
-
Allie K. Miller
#1 Most Followed Voice in AI Business (1.5M) | Former Amazon, IBM | Fortune 500 AI and Startup Advisor, Public Speaker | @alliekmiller on Instagram, X, TikTok | AI-First Course with 30K+ students - Link in Bio
Leading legal AI research tools still hallucinate. Like, a lot. 🤖⚖️

Stanford and Yale researchers tested Lexis+ AI, Westlaw AI-Assisted Research, Thomson Reuters Ask Practical Law AI, and GPT-4 on 200+ legal queries. The legal-tuned tools hallucinated on 17-33% of queries vs. 43% for GPT-4. Hallucinations ranged from fabricating case holdings to misinterpreting legal actors and authority.

Vertical AI is promising, but hallucination-free legal AI remains elusive for now. Read more here: https://lnkd.in/eFh-5QFP
-
Julian Joseph
Sr. AI Product Manager, Gen-AI | Ex-Google | Mentor | Speaker (She/her)
Eerie timing 😀 In the second talk at the mesh conference, Michael Geist shared how Canada is trying to emulate the European model of penalizing hallucinations in legislative AI bots that return wrong citations. Wrong decisions with zero transparency into how they were made are the more harmful kind, so it is essential for regulators to balance attracting investment for innovation with requiring transparency. In Canada, social media in search results is marked high risk for legal AI systems, something the EU hasn't done yet.

What about the US?
- It tends to be self-regulated, looking to the industry to lead and to identify self-harm signals.
- It leads in AI investment and commercialization, even though Canada may have done the underlying research.
- Litigation has focused on copyright: what goes into the systems as inputs to the LLM, without permission from the creators and original writers. Companies are trying to follow suit by saying that creators should opt out of being found, as with SEO.
- Claims that LLM output feels like copyright infringement aren't holding ground. For example, New York Times articles that were copied by many other websites and made available in sources other than the original were reproduced in outputs in the same style, with near-exact responses, because of those numerous occurrences.
-
Barys Adnaburtsau
Healthcare Data Scientist | Researcher | Passionate about pediatrics and parenting
AI models still hallucinate. The following study was conducted on legal #AI #research models. We should not expect the situation with healthcare AI models to be much better. We are still in the early stages of AI development. #ArtificialIntelligence #Hallucination #Legal
-
Troy Smith
Fractional CTO | AI Solution Architect: I provide strategic guidance and tech leadership to startups and small businesses.
In AI, a hallucination is a wrong or misleading response. AI is still in its infancy, so this is to be expected for the foreseeable future. Anyone who uses AI to generate content should be aware of this.

As an experiment, I opened this document in Adobe Acrobat, which now has an AI Assistant (in beta). I asked it to summarize the document, and its response was:

"The document titled "Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools" provides a comprehensive evaluation of AI-driven legal research tools, specifically focusing on LexisNexis's Lexis+ AI, Thomson Reuters's Ask Practical Law AI, and Westlaw's AI-Assisted Research.[1] The evaluation reveals that while these tools claim to mitigate hallucination risks, they still exhibit a significant rate of hallucinations, ranging from 17% to 33%. The document discusses the limitations and implications of AI tools in legal practice, emphasizing the need for lawyers to verify propositions manually and the importance of rigorous benchmarking and public evaluations.[2] It also highlights the challenges and variations in performance among different systems, providing examples of hallucinations and explaining their causes.[3] The document concludes by emphasizing the need for legal professionals to supervise and verify AI outputs and provides a typology for differentiating between hallucinations and accurate legal responses.[4]"

The bracketed numbers in the summary were links to the sections of the document being summarized.
-
New Jersey State Bar Association
5,174 followers
In response to the rapid rise of artificial intelligence in the law, the NJSBA has published a wide-ranging report outlining AI’s impact on the legal profession, with practical guidance on how attorneys can benefit from AI while navigating the technology safely and ethically. The 36-page report, created by the NJSBA Task Force on Artificial Intelligence and the Law, addresses the fundamental considerations attorneys must make when implementing AI in their legal practice. Read the full report here: https://ow.ly/t3QZ50S8ru7 #NJStateBar #NJSBA #AI #ArtificialIntelligence #ArtificialIntelligenceLaw
-
Shawn Veltman
Building The Software That's Eating The World
Fine-tuning an LLM for legal uses just seems like exactly the wrong approach to me. In my mind, the better approach would be an agentic workflow like this (the example is legal research, but it could be other areas):

1. Have an agent pull out specific references in cases that match what you're asking (RAG / summarized chunks of text / etc. to figure out the specific text to analyze), giving the specific reference, why it thinks that reference fits the case, and any other cases referenced in that chunk.
2. Have another agent judge the output to determine if it really is relevant.
3. For each "other" case referenced, recursively have agents pull out the relevant data from those, or limit it to a certain "depth" or "breadth" of search (i.e. 50 cases deep, or 500 cases in total).
4. Generate a report with references (both cases and line numbers / exact text to validate).

Obviously this could (and should!) be extended with added functionality. Maybe add in some research tools to find other interpretations of the cases/references specifically, and determine if you want to include those in the analysis. Maybe add in a separate agent to suggest arguments based on the references, and create a whole new agentic flow to find evidence to support/detract from those arguments.

There is a world of possibilities, but fine-tuning an LLM seems like a bad way to do it.
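The steps above can be sketched in a few lines. This is a minimal, hypothetical sketch: the "agents" here are stubs over a toy three-case corpus, where a real system would make LLM calls (extraction over RAG-retrieved chunks, a second judge model), and all names (`CORPUS`, `extract_agent`, `judge_agent`, `research`) are invented for illustration.

```python
from dataclasses import dataclass

# Toy corpus standing in for a case-law database: case name -> (text, cited cases).
CORPUS = {
    "Smith v. Jones": ("Holding that consumer data collection requires notice ...", ["Doe v. Roe"]),
    "Doe v. Roe": ("Earlier holding on privacy expectations in records ...", ["Old v. Older"]),
    "Old v. Older": ("Foundational dictum on privacy ...", []),
}

@dataclass
class Reference:
    case: str
    excerpt: str     # exact text, so a human can validate it (step 4)
    rationale: str   # why the agent thinks the reference fits
    cited: list      # other cases mentioned in the chunk

def extract_agent(case: str, query: str) -> Reference:
    """Step 1 (stub): a real agent would use an LLM over RAG chunks to pick
    the specific passage and explain why it matches the query."""
    text, cited = CORPUS[case]
    return Reference(case, text[:40], f"passage appears to address: {query}", cited)

def judge_agent(ref: Reference, query: str) -> bool:
    """Step 2 (stub): a second agent would verify relevance; here we accept all."""
    return True

def research(query: str, seed_cases: list, max_depth: int = 2, max_total: int = 50) -> list:
    """Step 3: recursively follow citations, bounded by depth and total count."""
    seen, report = set(), []
    frontier = [(c, 0) for c in seed_cases]
    while frontier and len(report) < max_total:
        case, depth = frontier.pop(0)
        if case in seen or case not in CORPUS:
            continue  # skip cycles and cases we can't retrieve
        seen.add(case)
        ref = extract_agent(case, query)
        if judge_agent(ref, query):
            report.append(ref)
            if depth < max_depth:
                frontier.extend((c, depth + 1) for c in ref.cited)
    return report  # step 4: report with excerpts and rationales for validation

for r in research("consumer data privacy", ["Smith v. Jones"]):
    print(f"{r.case}: {r.excerpt!r} ({r.rationale})")
```

The depth/total bounds are what keep the recursive citation-chasing from exploding, and keeping the exact excerpt in each `Reference` is what makes the final report human-verifiable rather than another hallucination surface.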