An AI-powered legal research platform that enables users to build LLM-based tools tailored to legal workflows. The platform also provides frameworks for evaluating AI tools across practice areas.
Josef is a no-code platform designed for legal professionals to automate legal tasks and to build and launch their own legal chatbots and services. It empowers lawyers, corporate counsel, and legal operations professionals to create digital legal tools.
Clearbrief is a tool designed for lawyers to evaluate legal writing in real-time, including their own work and that of opposing counsel. It aims to help lawyers prepare arguments more efficiently and communicate more effectively with judges, potentially enhancing their reputation with clients and courts. Clearbrief also offers features such as citation analysis and the ability to turn an opponent's writing into a draft response.
Trusli is an automation platform that leverages large language models to automate contract reviews for in-house legal teams at enterprise organizations. It provides private AI that enhances efficiency and reduces costs while ensuring legal teams maintain control and compliance. Trusli was acquired by Gruve AI in June 2024 and continues to operate and serve its customers with the same commitment.
DraftWise is an AI-powered contract drafting and negotiation platform designed for transactional lawyers. It leverages a firm's existing knowledge base and past deals to improve the efficiency and accuracy of contract creation and review. DraftWise integrates with tools like Microsoft Word and document management systems to provide a unified view of a firm's collective knowledge.
FirstRead is an AI legal assistant designed for small and midsize law firms. It provides support by drafting legal documents, analyzing contracts, and managing legal tasks. It aims to increase efficiency and bandwidth for law firms without the traditional costs associated with hiring additional staff.
The new GenAI Profile reflects NIST's recommendations for implementing the risk management principles of the AI RMF specifically with respect to generative AI. This guidance is intended to assist organizations with implementing comprehensive risk management techniques for specific known risks that are unique to or exacerbated by the deployment and use of generative AI applications and systems.
Image-generating technology is accelerating quickly, making it much more likely that you will be seeing "digital replicas" (sometimes referred to as "deepfakes") of celebrities and non-celebrities alike across film, television, documentaries, marketing, advertising, and election materials. Meanwhile, legislators are advocating for protections against the exploitation of name, image, and likeness while attempting to balance the First Amendment rights creatives enjoy.
Aescape offers AI-powered robotic massages via its “Aertable,” which uses real-time feedback and a body-scan system to deliver personalized experiences. With features like “Aerpoints” simulating a therapist’s touch and “Aerwear” enhancing accuracy, Aescape addresses the massage industry’s challenges, such as inconsistency and therapist shortages. While expanding rapidly, it raises legal issues including liability, privacy, licensing, regulation, and IP concerns.
Large language models rely on vast internet-scraped data, raising legal concerns, especially around intellectual property. Many U.S. lawsuits allege IP violations tied to data scraping. An OECD report, Intellectual Property Issues in AI Trained on Scraped Data, examines these challenges and offers guidance for policymakers on addressing legal and policy concerns in AI training.
The integration of artificial intelligence (AI) tools in healthcare is revolutionizing the industry, bringing efficiencies to the practice of medicine and benefits to patients. However, negotiating contracts for third-party AI tools requires a nuanced understanding of each tool’s application, implementation, risks, and contractual pressure points.
Parents of two Texas children have sued Character Technologies, claiming its chatbot, Character.AI, exposed their kids (ages 17 and 11) to self-harm, violence, and sexual content. Filed by the Social Media Victims Law Center and Tech Justice Law Project, the suit seeks to shut down the platform until safety issues are addressed. It also names the company’s founders, Google, and Alphabet Inc. as defendants.