
Forging the digital future – MIT Technology Review

As machine learning and generative AI reshape the world, MIT’s Schwarzman College of Computing is integrating these and other advanced computing technologies into classrooms and labs across campus.
Dan Huttenlocher, SM ’84, PhD ’88, leads the way up to the eighth floor of Building 45, the recently completed headquarters of the MIT Schwarzman College of Computing. “There’s an amazing view of the Great Dome here,” he says, pointing out a panoramic view of campus and the Boston skyline beyond. The floor features a high-end event space with an outdoor terrace and room for nearly 350 people. But it also serves an additional purpose—luring people into the building, which opened last January. The event space “wasn’t in the original building plan,” says Huttenlocher, Schwarzman’s inaugural dean, “but the point of the building is to be a nexus, bringing people across campus together.” 
Launched in 2019–’20, Schwarzman is MIT’s only college, so called because it cuts across the Institute’s five schools in a new effort to integrate advanced computing and artificial intelligence into all areas of study. “We want to do two things: ensure that MIT stays at the forefront of computer science, AI research, and education,” Huttenlocher says, “and infuse the forefront of computing into disciplines across MIT.” He adds that safety and ethical considerations are also critical.
To that end, the college now encompasses multiple existing labs and centers, including the Computer Science and Artificial Intelligence Laboratory (CSAIL), and multiple academic units, including the Department of Electrical Engineering and Computer Science. (EECS—which was reorganized into the overlapping subunits of electrical engineering, computer science, and artificial intelligence and decision-making—is now part of both the college and the School of Engineering.) At the same time, the college has embarked on a plan to hire 50 new faculty members, half of whom will have shared appointments in other departments across all five schools to create a true Institute-wide entity. Those faculty members—two-thirds of whom have already been hired—will conduct research at the boundaries of advanced computing and AI.
“We want to do two things: ensure that MIT stays at the forefront of computer science, AI research, and education and infuse the forefront of computing into disciplines across MIT.”
The new faculty members have already begun helping the college respond to an undeniable reality facing many students: They’ve been overwhelmingly drawn to advanced computing tools, yet computer science classes are often too technical for nonmajors who want to apply those tools in other disciplines. And for students in other majors, it can be tricky to fit computer science classes into their schedules. 
Meanwhile, the appetite for computer science education is so great that nearly half of MIT’s undergraduates major in EECS, voting with their feet about the importance of computing. Graduate-level classes on deep learning and machine vision are among the largest on campus, with over 500 students each. And a blended major in cognition and computing has almost four times as many enrollees as brain and cognitive sciences.
“We’ve been calling these students ‘computing bilinguals,’” Huttenlocher says, and the college aims to make sure that MIT students, whatever their field, are fluent in the language of computing. “As we change the landscape,” he says, “it’s not about seeing computing as a tool in service of a particular discipline, or a discipline in the service of computing, but asking: How can we bring these things together to forge something new?” 
The college has been the hub of this experiment, sponsoring over a dozen new courses that integrate computing with other disciplines, and it provides a variety of spaces that bring people together for conversations about the future of computing at MIT.
More than just a nexus for computing on campus, the college has also positioned itself as a broad-based leader on AI, presenting policy briefs to Congress and the White House about how to manage the pressing ethical and political concerns raised by the rapidly evolving technology. 
“Right now, digital technologies are changing every aspect of our lives with breakneck speed,” says Asu Ozdaglar, SM ’98, PhD ’03, EECS department head and Schwarzman’s deputy dean of academics. “The college is MIT’s response to the ongoing digital transformation of our society.” 
Huttenlocher, who also holds the title of Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science and coauthored the book The Age of AI: And Our Human Future with Henry Kissinger and Eric Schmidt, has long been exploring such issues. He started programming computers back in middle school in Connecticut in the 1970s on an ASR 33 teletype machine, and eventually he studied at the University of Michigan as a double major in cognitive psychology and computer science, exploring speech recognition and visual perception. “AI work back then was relatively disconnected from the physical world,” he says. “Being interested in the perceptual side of things was kind of an outlier for what was going on in AI then.” When he looked at grad schools in the 1980s, only MIT, Carnegie Mellon, and Stanford were doing significant work in AI, he says: “I applied to those three schools and figured if it didn’t work out, I’d get a job.”
It worked out, of course. He headed to Cambridge and gravitated to MIT’s AI Lab in Technology Square, where he first worked on speech recognition and then transitioned into computer vision, at the time still in its infancy. After earning his PhD, he served simultaneously as a computer science professor at Cornell and a researcher at Xerox PARC, flying between New York and the burgeoning Silicon Valley, where he worked on computer vision for the digital transformation of copiers and scanners. “In academia, you have more curiosity-driven research projects, where in the corporate world you have the opportunity to build things people will actually use,” he says. “I’ve spent my career moving back and forth between them.”
Along the way, Huttenlocher gained administrative experience as well. He was a longtime board member and eventual chair of the MacArthur Foundation, and he also helped launch Cornell Tech, the university’s New York City–based graduate school for business, law, and technology, serving as its first dean and vice provost. So when Stephen Schwarzman, CEO of the investment firm Blackstone Group, gave $350 million to MIT in 2018 to establish a college of computing, Huttenlocher was eager to return to the Institute to lead it. “The fact that MIT was making a bold commitment to become a broad-based leader in the AI-driven age—and that it was cutting across all of its schools—was exciting,” he says. 
Schwarzman College took shape through task forces involving more than 100 MIT faculty members. By the fall of 2019 a plan had been nailed down, and Huttenlocher was in place as dean, with EECS head Ozdaglar named deputy dean of academics. “I never believed that everybody wants to do computer science at MIT,” she says. “Students come in with a lot of passions, and it’s our responsibility to educate these bilinguals, so they are fluent in their own discipline but also able to use these advanced frontiers of computing.” 
Ozdaglar’s background is in using machine learning to optimize communications, transportation, and control systems. Recently she has become interested in applying machine-learning algorithms to social media, examining how the choices people make when sharing content affect the information—and misinformation—recommended to them. This work builds on her longstanding interdisciplinary collaborations in the social sciences, including collaborations with her husband, economics professor (and recent Nobel laureate) Daron Acemoglu. “I strongly feel that to really address the important questions in society, these old department or disciplinary silos aren’t adequate anymore,” she says. “The college has enabled me to work much more broadly across MIT and share all that I’ve learned.”
Ozdaglar has been a driving force behind faculty hiring for the college, working with 18 departments to bring on dozens of scholars at the forefront of computing. In some ways, she says, it’s been a challenge to integrate the new hires into existing disciplines. “We have to keep teaching what we’ve been teaching for tens or hundreds of years, so change is hard and slow,” she says. But she has also noticed a palpable excitement about the new tools. Already, the college has brought in more than 30 new faculty members in four broad areas: climate and computing; human and natural intelligence; humanistic and social sciences; and AI for scientific discovery. In each case, they receive an academic home in another department, as well as an appointment, and often lab space, within the college. 
That commitment to interdisciplinary work has been built into every aspect of the new headquarters. “Most buildings at MIT come across as feeling pretty monolithic,” Huttenlocher says as he leads the way along brightly lit hallways and common spaces with large walls of glass looking out onto Vassar Street. “We wanted to make this feel as open and accessible as possible.” While the Institute’s high-end computing takes place mostly at a massive computing center in Holyoke, about 90 miles away in Western Massachusetts, the building is honeycombed with labs and communal workspaces, all made light and airy with glass and natural blond wood. Along the halls, open doorways offer enticing glimpses of such things as a giant robot hanging from a ceiling amid a tangle of wires. 
Lab and office space for faculty research groups working on related problems—who might be from, say, CSAIL and LIDS—is interspersed on the same floor to encourage interaction and collaboration. “It’s great because it builds connections across labs,” Huttenlocher says. “Even the conference room does not belong to either the lab or the college, so people actually have to collaborate to use it.” Another dedicated space is available six months at a time, by application, for special collaborative projects. The first group to use it, last spring, focused on bringing computation to the climate challenge. To make sure undergrads use the building too, there’s a classroom and a 250-seat lecture hall, which now hosts classic Course 6 classes (such as Intro to Machine Learning) as well as new multidiscipline classes. A soaring central lobby lined with comfortable booths and modular furniture is ready-made for study sessions. 
For some of the new faculty, working at the college is a welcome change from previous academic experiences in which they often felt caught between disciplines. “The intersection of climate sustainability and AI was nascent when I started my PhD in 2015,” says Sherrie Wang, an assistant professor with a shared appointment in mechanical engineering and the Institute for Data, Systems, and Society, who is principal investigator of the Earth Intelligence Lab. When she hit the job market in 2022, it still wasn’t clear which department she’d be in. Now a part of Schwarzman’s climate cluster, she says her work uses machine learning to analyze satellite data, examining crop distribution and agricultural practices across the world. “It’s great to have a cohort of people who have similar philosophical motivations in applying these tools to real-world problems,” she says. “At the same time, we’re pushing the tools forward as well.”
Among other researchers, she plans to collaborate with Sara Beery, a CSAIL professor who analyzes vast troves of visual, auditory, and other data from a diverse range of sensors around the world to better understand how climate change is affecting distribution of species. “AI can be successful in helping human experts efficiently process terabytes and petabytes of data so they can make informed management decisions in real time rather than five years later,” says Beery, who was drawn to the college’s unique hybrid nature. “We need a new generation of researchers that frame their work by bringing different types of knowledge together. At Schwarzman, there is a clear vision that this type of work is going to be necessary to solve these big, essential problems.” 
Beery is now working to develop a class in machine learning and sustainability with two other new faculty members in the climate cluster: Abigail Bodner, an assistant professor in EECS and Earth, Atmospheric, and Planetary Sciences (whose work uses AI to analyze fluid dynamics), and Priya Donti, assistant professor in EECS and LIDS (who uses AI and computing to optimize integration of renewable energy into power grids). “There’s already a core course on AI and machine learning—an on-ramp for people without prior exposure who want to gain those fundamentals,” says Donti. “The new class would be for those who want to study advanced AI/ML topics within the context of sustainability-related disciplines, including power systems, biodiversity, and climate science.” 
The class on machine learning and sustainability would be part of Common Ground for Computing Education, an initiative cochaired by Ozdaglar and involving several dozen faculty members across MIT to develop new classes integrating advanced computing with other disciplines. So far, says Ozdaglar, it has generated more than a dozen new courses. One machine-learning class developed with input from nine departments provides exposure to a variety of practical applications for AI algorithms. Another collaboration, between computer science and urban studies, uses data visualization to address housing issues and other societal challenges. 
Julia Schneider ’26, a double major in AI and mathematics, took the Common Ground class on optimization methods, which she says demonstrated how computer science concepts like shortest-path algorithms and reinforcement learning could be applied in other areas, such as economics and business analytics. She adds that she values such classes because they blend her two areas of study and highlight multidisciplinary opportunities. 
“Even faculty who are leading researchers in this area say ‘I can’t read fast enough to keep up with what’s going on.’”
Natasha Hirt ’23, MEng ’23, came to MIT thinking that computer science was peripheral to her major in architecture and urban planning. Then she took a course with building technology professor Caitlin Mueller on structural optimization and design—and it changed the trajectory of her MIT career. That led her to Interactive Data Visualization and Society, a Common Ground class, and several interdisciplinary classes combining computer science and field-specific knowledge. She says these provided the perfect introduction to algorithms without delving too much into math or coding, giving her enough working knowledge to set up models correctly and understand how things can go wrong. “They are teaching you what an engine is, what it looks like, and how it works without actually requiring you to know how to build an engine from scratch,” she says, though she adds that the classes also gave her the opportunity to tinker with the engine.
She’s now working on master’s degrees in both building technology and computation science and engineering, focusing on making buildings more sustainable by using computational tools to design novel, less material-intensive structures. She says that Common Ground facilitates an environment where students don’t have to be computer science majors to learn the computational skills they need to succeed in their fields. 
And that’s the intent. “My hope is that this new way of thinking and these educational innovations will have an impact both nationally and globally,” Ozdaglar says.
The same goes for recent papers MIT has commissioned, both on AI and public policy and on applications of generative AI. As generative AI has spread through many realms of society, it has become an ethical minefield, giving rise to problems from intellectual-property theft to deepfakes. “The likely consequence has been to both over- and under-regulate AI, because the understanding isn’t there,” Huttenlocher says. But the technology has developed so rapidly it’s been nearly impossible for policymakers to keep up. “Even faculty who are leading researchers in this area say ‘I can’t read fast enough to keep up with what’s going on,’” Huttenlocher says, “so that heightens the challenge—and the need.”
The college has responded by engaging faculty at the cutting edge of their disciplines to issue policy briefs for government leaders. First was a general framework written in the fall of 2023 by Huttenlocher, Ozdaglar, and the head of MIT’s DC office, David Goldston, with input from more than a dozen MIT faculty members. The brief spells out essential tasks for helping the US maintain its AI leadership, as well as crucial considerations for regulation. The college followed that up with a policy brief by EECS faculty specifically focusing on large language models such as ChatGPT. Others dealt with AI’s impact on the workforce, the effectiveness of labeling AI content, and AI in education. Along with the written documents, faculty have briefed congressional committees and federal agencies in person to get the information directly into the hands of policymakers. “The question has been ‘How do we take MIT’s specific academic knowledge and put it into a form that’s accessible?’” Huttenlocher says. 
On a parallel track, in July of 2023 President Sally Kornbluth and Provost Cynthia Barnhart, SM ’86, PhD ’88, issued a call for papers by MIT faculty and researchers to “articulate effective road maps, policy recommendations, and calls for action across the broad domain of generative AI.” Huttenlocher and Ozdaglar played a key role in evaluating the 75 proposals that came in. Ultimately, 27 proposals—exploring the implications of generative AI for such areas as financial advice, music discovery, and sustainability—were selected from interdisciplinary teams of authors representing all five schools. Each of the 27 teams received between $50,000 and $70,000 in seed funds to research and write 10-page impact papers, which were due by December 2023. 
Given the enthusiastic response, MIT sent out another call in the fall of 2023, resulting in an additional 53 proposals, with 16 selected in March, on topics including visual art, drug discovery, and privacy. As with the policy briefs, Huttenlocher says, “we are trying to provide the fresher information an active researcher in the field would have, presented in a way that a broader audience can understand.”
Even in the short time the college has been active, Huttenlocher and Ozdaglar have begun to see its effects. “We’re seeing departments starting to change some of the ways they are hiring around degree programs because of interactions with the college,” Huttenlocher says. “There is such a huge acceleration of AI in the world—it’s getting them to think with some urgency in doing this.” Whether through faculty hiring, new courses, policy papers, or just the existence of a space for high-level discussions about computing that had no natural home before, Huttenlocher says, the college hopes to invite the MIT community into a deeper discussion of how AI and other advanced computing tools can augment academic activities around campus. MIT has long been a leader in the development of AI, and for many years it has continued to innovate at the cutting edge of the field. With the college’s leadership, the Institute is in a position to continue innovating and to guide the future of the technology more broadly. “The next step,” says Ozdaglar, “is to take that impact out into the world.”
This story was part of our January/February 2025 issue.

source


Unit groups in an RTS – GameDev.net

In Starcraft, when an AI player attacks you, what you get is a thin, constant stream of units arriving at your base. What does it take to build an alternative? In Total Annihilation, the AI player waits until a group of units has piled up and then attacks. If the attack is unsuccessful, the remaining attacking units retreat. The Total Annihilation approach probably works the same as Starcraft’s, except for the retreat. The map layout in Starcraft probably causes the units to spread out like a stream. How does a typical RTS AI player’s attack work? Select all combat units, then attack-move the selection to the enemy base?
My project’s Facebook page is “DreamLand Page”
There has been a lot of variation in games over the decades, so there really isn’t a “typical.”
Some games have mini-formations, where a group of five soldiers may cluster and move as a group occupying a single tile. Some games have included commands to keep a formation, so you could place units in a particular ordering and they’d advance at a uniform speed based on the slowest unit, such as creating a line of soldiers who hold the line as they travel, attack, or retreat. Some games have included automatically issued commands causing units to scatter or evade defensively, or standing orders to hold position or not return fire rather than evade or counterattack.
The logic is typically implemented as nested state machines. Logic may be to move toward a target, patrol between points, react to getting attacked taking any of various sub-choices depending on what is attacking, and much more. The limit is usually more about development budget and time than it is about creativity, as designers with unlimited time and money could devise a tremendous number of commands and flows.
There are also hundreds of articles over the decades with all kinds of explanations of grouping behavior, flocking behavior, pathing together vs pathing independently, treating clusters as a single unit for pathfinding instead of a bunch of tiny objects all doing their own search, and much more.
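To make the nested-state-machine idea concrete, here is a minimal, self-contained Python sketch. Everything in it (class names, the one-dimensional world, the numbers) is invented for illustration and is not taken from any particular engine: a unit executes a top-level attack-move order, and that order nests its own "moving" and "fighting" sub-states.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Unit:
    x: float
    hp: int = 10
    speed: float = 1.0
    attack_range: float = 1.5
    aggro_range: float = 5.0

    def step_towards(self, target_x: float) -> None:
        direction = 1.0 if target_x > self.x else -1.0
        self.x += direction * min(self.speed, abs(target_x - self.x))

@dataclass
class AttackMoveOrder:
    unit: Unit
    target_x: float
    enemies: List[Unit]
    state: str = "moving"          # the top-level order nests "moving" / "fighting" sub-states
    enemy: Optional[Unit] = None

    def update(self) -> None:
        if self.state == "moving":
            live = [e for e in self.enemies
                    if e.hp > 0 and abs(e.x - self.unit.x) <= self.unit.aggro_range]
            if live:
                self.enemy = min(live, key=lambda e: abs(e.x - self.unit.x))
                self.state = "fighting"
            else:
                self.unit.step_towards(self.target_x)
        elif self.state == "fighting":
            if self.enemy.hp <= 0:
                self.state, self.enemy = "moving", None        # resume the original order
            elif abs(self.enemy.x - self.unit.x) <= self.unit.attack_range:
                self.enemy.hp -= 1                             # in range: attack
            else:
                self.unit.step_towards(self.enemy.x)           # close the distance first

# Tiny 1-D demo: a unit attack-moves toward x=20 and fights an enemy it meets on the way.
attacker, defender = Unit(x=0.0), Unit(x=8.0, hp=3)
order = AttackMoveOrder(attacker, target_x=20.0, enemies=[defender])
for _ in range(40):
    order.update()
print(f"attacker at x={attacker.x:.1f}, defender hp={defender.hp}")
```

Real games layer many more states on top of this (retreat, patrol, hold position, evade), but the shape stays the same: an outer order machine whose states contain their own smaller machines.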
There has been a lot of variation in games over the decades
I see what you’re saying. Age of Empires 2 had formations. That’s a bit more intricate than just select and move everyone to the enemy base.
My project’s Facebook page is “DreamLand Page”
Even that, “select everyone and move to the enemy base”, do you want them to stay together, or do you want some type of strategic organization?
Do you want the fastest moving units to arrive first, before the slowest, or do you want them to arrive as a cohesive whole? There are merits to both.
Do you want new units to arrive and attempt to join the formation as a cohesive whole, or just wherever you clicked?
Do you want newly formed units to replace the ones that were lost in the formations, or to meet at some other waypoint?
All are good ideas, and all require implementation.
Even that, “select everyone and move to the enemy base”, do you want them to stay together, or do you want
If it’s the most basic scenario ever there’s just one way to do it.
My project’s Facebook page is “DreamLand Page”
Calin said:
If it’s the most basic scenario ever there’s just one way to do it.
That’s the grand picture, the simple idea on top, the lower frequencies, the highest-level abstraction, or whatever you want to call it.
But below that lurk the depths, the details, the complexities, and with them comes the opportunity to add depth to the game.
At least that’s how you can think about it. In reality, I would say, it’s mostly just a pile of unexpected work.
Anyway, what I’m trying to say is that your quote does not hold if we take a closer look.
Even if all my units are supposed to just walk to the enemy base, there is never only one way to do it if there are multiple units.
As the player, I will observe their behavior. I will notice how they prevent collisions with each other, etc., and I will judge whether it looks natural or silly.
Ideally I’ll enjoy the observation. In German they call it ‘Wuselfaktor’: the pleasure of watching hundreds of cute tiny settlers building a village or going about their daily routines. It’s important that players find this satisfying to watch, and I guess the same applies to other top-down genres like RTS.
Your latest video showed good progress on this. The behavior looked much smoother, with no more visible ‘silly computer AI’ corner cases.
But improvement is still possible, and it will become necessary once better graphics expose more details.
Now I see two options to continue:
1. Decide on the features you want. E.g., if you want clearly defined formations, work on that: let the units form an N x M grid pattern and keep that shape while moving along a common path.
2. Don’t implement such features yet; instead, improve the default behavior so it looks more natural and the units appear smarter. Eventually you’ll have tuning parameters defining the behavior, and simply changing those parameters will determine whether units walk in a straight line toward a common goal or gather into a blob. (If you have terrain, it should affect these things too.) A small sketch of this parameter-driven idea follows at the end of this post.
You said you want to make a generic RTS AI, IIRC.
But sooner or later you’ll get down to the depths of all those details no matter what, and at that point it becomes easier to make just one specific game instead of some generic default AI meant to serve any game design.
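Here is that sketch: a minimal, self-contained Python illustration (not from any engine; every name and constant is invented) of how a few tuning weights for separation, cohesion, and goal-seeking decide whether a squad marches as a tight group or spreads into a blob, while the group clamps its pace to its slowest member.

```python
import math

SEPARATION_WEIGHT = 1.2   # push apart when units crowd each other
COHESION_WEIGHT   = 0.3   # pull toward the group's center
GOAL_WEIGHT       = 1.0   # pull toward the shared destination
SEPARATION_RADIUS = 2.0   # how close counts as "too close"

def steer(units, goal, dt=0.1):
    """units: list of dicts with 'pos' (x, y) and 'speed'. Positions are updated in place."""
    cx = sum(u["pos"][0] for u in units) / len(units)
    cy = sum(u["pos"][1] for u in units) / len(units)
    group_speed = min(u["speed"] for u in units)      # keep pace with the slowest unit
    for u in units:
        x, y = u["pos"]
        # Goal seeking plus cohesion toward the group's center.
        dx = GOAL_WEIGHT * (goal[0] - x) + COHESION_WEIGHT * (cx - x)
        dy = GOAL_WEIGHT * (goal[1] - y) + COHESION_WEIGHT * (cy - y)
        # Separation from close neighbours.
        for other in units:
            if other is u:
                continue
            ox, oy = other["pos"]
            d = math.hypot(x - ox, y - oy)
            if 0 < d < SEPARATION_RADIUS:
                dx += SEPARATION_WEIGHT * (x - ox) / d
                dy += SEPARATION_WEIGHT * (y - oy) / d
        length = math.hypot(dx, dy) or 1.0            # normalize; avoid division by zero
        u["pos"] = (x + dx / length * group_speed * dt,
                    y + dy / length * group_speed * dt)

# Example: five units with different speeds converging on a common goal.
squad = [{"pos": (float(i), 0.0), "speed": 1.0 + 0.1 * i} for i in range(5)]
for _ in range(300):
    steer(squad, goal=(10.0, 10.0))
print([tuple(round(c, 1) for c in u["pos"]) for u in squad])
```

Raising SEPARATION_WEIGHT spreads the squad into a loose blob; raising COHESION_WEIGHT and GOAL_WEIGHT pulls it back into a tight column, which is exactly the kind of knob-turning described above.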

source


PIE News: International Higher Education: US Sector Predictions for 2025 – Fragomen

December 23, 2024
Aaron Blumberg
Partner
Partner Aaron Blumberg discussed with The PIE News potential challenges for international students as the US prepares for policy changes under the incoming administration. He highlighted the contributions these students make to research, academic institutions and cultural exchange. Aaron noted the importance of fostering an environment where international students can continue to study, work and live in the US. Despite uncertainties, he underscored the resilience of the international education community and its capacity to navigate change.
Read more here.
source


Federal Judge strikes down parts of Arkansas law intended to regulate library books – talkbusiness.net

by Michael Tilley
U.S. District Court Judge Timothy Brooks on Monday (Dec. 23) issued an order striking down key provisions in Act 372, an attempt by the Arkansas Legislature to censor library books. Brooks in July 2023 issued an injunction blocking implementation of the act.
Arkansas Attorney General Tim Griffin said he respects the judge’s ruling, but plans to appeal.
Several Arkansas libraries, library associations, bookstore owners, and booksellers in early 2023 sued the state in an effort to overturn Act 372, which was set to become law on Aug. 1, 2023.
The lawsuit was also connected to an attempt by Crawford County officials to relocate certain books to a “social section.” U.S. District Court Judge P.K. Holmes III ruled in September against Crawford County in a First Amendment lawsuit regarding the removal and relocation of books largely because of objections from citizens to LGBTQ content.
Specifically, Act 372 creates a process for books to be challenged in public libraries, with library officials having the option to appeal the challenge to the local county and/or city government. It also removes the exemption protecting librarians from criminal penalties if they are found to have knowingly provided certain materials to minors. Republicans who successfully pushed for Act 372 said the law was needed to protect minors from certain books, with many of the books mentioned being LGBTQ-related.
Plaintiffs in the lawsuit filed within the Fayetteville Division of the Western District Court of Arkansas include the Fayetteville Public Library, the Central Arkansas Library System, the Eureka Springs Carnegie Public Library, and the American Booksellers Association. Defendants listed include Crawford County Judge Chris Keith, members of the Crawford County Public Library Board, and all Arkansas prosecuting attorneys.
According to the plaintiffs, Act 372 limits access to constitutionally protected materials, violates constitutionally protected free speech, violates due process, and lacks a judicial review of decisions to ban or relocate library items.
Brooks, based in Fayetteville with the Western District of Arkansas, largely agreed with arguments set forth by plaintiffs that the law was vague and allowed unconstitutional censorship by library committees and local governments.
“Moreover, if a library committee or local governmental body elected to relocate a book instead of withdrawing it, Section 5 contemplates moving the book ‘to an area that is not accessible to minors under the age of eighteen (18) years’ – without defining what ‘accessible to minors’ means,” Brooks noted in part of his ruling. “If Section 5 were to take effect, libraries would have to guess what level of security would be necessary to satisfy the law’s ‘[in]accessib[ility]’ requirements. For all of these reasons, the Court finds that Section 5 fails the ‘stringent vagueness test’ that applies to a law that interferes with access to free speech.”
In another section of the ruling, Brooks noted: “The Court therefore concludes that Plaintiffs have established as a matter of law that Section 5 would permit, if not encourage, library committees and local governmental bodies to make censorship decisions based on content or viewpoint, which would violate the First Amendment.”
Gov. Sarah Sanders, who praised passage of Act 372, said she will work with Griffin on the appeal.
“Act 372 is just common sense: schools and libraries shouldn’t put obscene material in front of our kids. I will work with Attorney General Griffin to appeal this ruling and uphold Arkansas law,” Sanders said.
Holly Dickson, ACLU of Arkansas executive director, said the judge’s order “ensures that libraries remain sanctuaries for learning and exchange of ideas and information.”
“This was an attempt to ‘thought police,’ and this victory over totalitarianism is a testament to the courage of librarians, booksellers, and readers who refused to bow to intimidation,” she said in a statement.
Link here for a PDF of Brooks’ order.
source


Evaluating the performance of health care artificial intelligence (AI): the role of AUPRC, AUROC, and average precision – Kevin MD

As artificial intelligence (AI) becomes more embedded in health care, the ability to accurately evaluate AI models is critical. In medical applications, where early diagnosis and anomaly detection are often key, selecting the right AI performance metrics can determine the clinical success or failure of AI tools. If a health care AI tool claims to predict disease risk or guide treatment options, it must be rigorously validated to ensure its outputs are true representations of the medical phenomena it assesses. In evaluating health care artificial intelligence, two critical factors, validity and reliability, must be considered to ensure trustworthy AI systems.
When using medical AI, errors are inevitable, but understanding their implications is vital. False positives occur when an AI system incorrectly identifies a disease or condition in a patient who does not have it, leading to unnecessary tests, treatments, and patient anxiety. False negatives, on the other hand, occur when the system fails to detect a disease or condition that is present, potentially delaying critical interventions. These types of errors, known as Type I and Type II errors, respectively, are particularly relevant in AI systems designed for diagnostic purposes. Validity is crucial because inaccurate predictions can lead to inappropriate treatments, missed diagnoses, or overtreatment, all of which compromise patient care. Reliability, the consistency of an AI system’s performance, is equally important. A reliable AI model will produce the same results when applied to similar cases, ensuring that physicians can trust its outputs across different patient populations and clinical scenarios. Without reliability, physicians may receive conflicting or inconsistent recommendations from AI health care tools, leading to confusion and uncertainty in clinical decision-making.
Physicians should focus on three important AI metrics and on how they apply to health care AI models: 1) the area under the precision-recall curve (AUPRC), 2) the area under the receiver operating characteristic curve (AUROC), and 3) average precision (AP). In health care, many AI predictive tasks involve imbalanced datasets, where the positive class (e.g., patients with a specific disease) is much smaller than the negative class (e.g., healthy patients). This is often the case in areas like cancer detection, rare disease diagnosis, or anomaly detection in critical care settings. Traditional performance metrics may not fully capture how well an AI model performs in such situations, particularly when the rare positive cases are the most clinically significant.
In binary classification, where an AI model is tasked with predicting whether a patient has a certain condition or not, choosing the right metric is crucial. For instance, an AI model that predicts “healthy” for nearly every case might score well on accuracy but fail to detect the rare but critical positive cases. This makes AI metrics like AUPRC, AUROC, and AP particularly valuable in evaluating how well an AI system balances identifying true positives while minimizing false positives and negatives.
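A minimal sketch of that accuracy trap, using simulated data (the 1 percent prevalence and the trivial "always healthy" model are assumptions chosen purely for illustration):

```python
import numpy as np
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(0)
y_true = (rng.random(10_000) < 0.01).astype(int)  # 1 = patient has the condition (~1%)
y_pred = np.zeros_like(y_true)                    # trivial model: always predicts "healthy"

print(f"accuracy: {accuracy_score(y_true, y_pred):.3f}")  # ~0.99, looks impressive
print(f"recall:   {recall_score(y_true, y_pred):.3f}")    # 0.0, misses every true case
```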
Area under the precision-recall curve (AUPRC) is a performance metric that is particularly well-suited for imbalanced classification tasks, such as health care anomaly detection or disease screening. AUPRC summarizes the trade-offs between precision (the percentage of true positive predictions out of all positive predictions) and recall (the percentage of actual positive cases correctly identified). It is especially useful in scenarios where finding positive examples, such as identifying cancerous lesions or predicting organ failure, is of utmost importance.
AUPRC is particularly relevant in AI health care because precision is critical, especially when treatments or interventions can have negative consequences. Recall is essential when missing a true positive, such as a missed cancer diagnosis, could be life-threatening. By focusing on these two AI metrics, AUPRC provides a clearer picture of how well an AI model performs when the goal is to maximize correct positive classifications while keeping false positives in check. For example, in the context of sepsis detection in the ICU, where early and accurate detection is crucial, a high AUPRC indicates that the AI model can identify true sepsis cases without overwhelming clinicians with false positives.
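For readers who want to see how AUPRC is computed in practice, here is a small, self-contained sketch using scikit-learn. The data are simulated, and y_score stands in for a model's predicted probability of the condition; none of the numbers come from a real clinical dataset.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve, auc

rng = np.random.default_rng(42)
n = 5_000
y_true = (rng.random(n) < 0.02).astype(int)          # ~2% positive cases (imbalanced)
# Simulated model scores: positives tend to score higher than negatives.
y_score = np.where(y_true == 1,
                   rng.normal(0.7, 0.15, n),
                   rng.normal(0.3, 0.15, n)).clip(0, 1)

precision, recall, _ = precision_recall_curve(y_true, y_score)
auprc = auc(recall, precision)                       # trapezoidal area under the PR curve
print(f"AUPRC: {auprc:.3f}")
```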
While AUPRC is valuable for evaluating AI systems in imbalanced datasets, another common AI metric is the area under the receiver operating characteristic curve (AUROC). AUROC is often used in binary classification tasks because it evaluates both false positives and false negatives by plotting the true positive rate against the false positive rate. However, AUROC can be misleading in imbalanced datasets where the majority class (e.g., healthy patients) dominates the predictions. In such cases, AUROC may still give a high score even if the AI model is performing poorly in detecting the minority positive cases.
For example, in a cancer screening program where the prevalence of cancer is very low, an AI model that predicts “no cancer” for most cases could still score well on AUROC despite missing a significant number of true cancer cases. In contrast, AUPRC would give a more accurate reflection of the model’s ability to find the rare positive cases. That said, AUROC is still valuable in situations where both false positives and false negatives carry significant costs. In applications like early cancer screening, where missing a diagnosis (false negative) can be just as costly as over-diagnosis (false positive), AUROC may be a better choice for evaluating AI model performance.
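A short illustration of that divergence, again with simulated data (the 0.5 percent prevalence and the deliberately weak model are assumptions chosen to make the contrast visible): on a heavily imbalanced dataset, the same scores can produce a reassuring AUROC and a much lower average precision.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(7)
n = 100_000
y_true = (rng.random(n) < 0.005).astype(int)   # ~0.5% positives (rare condition)
# Weakly informative scores: positives are shifted up only slightly.
y_score = rng.normal(0.0, 1.0, n) + 1.5 * y_true

print(f"AUROC:             {roc_auc_score(y_true, y_score):.3f}")            # typically around 0.85
print(f"average precision: {average_precision_score(y_true, y_score):.3f}")  # typically far lower
```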
Another important AI metric is average precision (AP), which is commonly used as an approximation for AUPRC. While there are multiple methods to estimate the area under the precision-recall curve, AP provides a reliable summary of how well an AI model performs across different precision-recall thresholds. AP is particularly useful in health care applications where anomaly detection is key. For instance, in predicting hypotension during surgery, where early detection can prevent life-threatening complications, the AP score provides insight into the AI system’s effectiveness in catching such anomalies early and with high precision.
There are different ways to estimate the area under the precision-recall curve (AUPRC), with the trapezoidal rule and average precision (AP) being two of the most common. While both methods are useful, they can produce different results: the trapezoidal rule linearly interpolates between points on the precision-recall curve and therefore tends to give a slightly more optimistic estimate, while AP computes a weighted mean of the precision achieved at each threshold, weighted by the increase in recall from the previous threshold.
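The difference can be seen directly by computing both estimates on the same simulated scores (illustrative numbers only, not from any clinical dataset):

```python
import numpy as np
from sklearn.metrics import precision_recall_curve, auc, average_precision_score

rng = np.random.default_rng(3)
n = 20_000
y_true = (rng.random(n) < 0.01).astype(int)    # ~1% positives
y_score = rng.normal(0.0, 1.0, n) + 2.0 * y_true

precision, recall, _ = precision_recall_curve(y_true, y_score)
print(f"trapezoidal AUPRC: {auc(recall, precision):.4f}")                     # linear interpolation
print(f"average precision: {average_precision_score(y_true, y_score):.4f}")  # step-wise weighted mean
```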
For AI health care applications like cardiac arrest prediction, where precise detection is vital, AP often gives a clearer picture of the AI model’s ability to balance precision and recall effectively. Physicians must be aware that in health care, making clinical decisions based on AI predictions requires a deep understanding of how well the AI model performs in rare but critical situations. AUPRC may be suited to evaluating AI models designed to detect rare conditions, such as cancer diagnosis, sepsis detection, and hypotension prediction, where a high AUPRC score ensures that the AI system is catching these rare events while minimizing false alarms that could distract clinicians.
In summary, the evaluation of AI models in health care requires careful consideration of which AI metrics provide the most meaningful insights. For tasks involving imbalanced datasets common in health care applications such as disease diagnosis, anomaly detection, and early screening, AUPRC offers a more targeted and reliable assessment than traditional AI metrics like AUROC. By focusing on precision and recall, AUPRC gives a more accurate reflection of an AI system’s ability to find rare but important positive cases, making it an essential tool for evaluating AI in medical practice. Average precision (AP) also serves as a valuable approximation of AUPRC and can provide even more precise insights into how well an AI system balances precision and recall across varying thresholds. Together, these AI metrics empower clinicians and researchers to assess the performance of AI models in real-world health care settings, ensuring that AI tools contribute effectively to improving patient outcomes.
Neil Anand is an anesthesiologist.
source


Herbs and wild foraging with Dr. Bob Linde (Acupuncture and Herbal Therapies) – WMNF

On this week’s Sustainable Living show, Anni Ellis is joined by Dr. Bob Linde of Acupuncture and Herbal Therapies and Traditions School of Herbal Studies to discuss wild foraging and herbal medicine. Dr. Bob, a registered herbalist and acupuncture physician, has worn many hats in his life: commercial lobster and conch diver, treasure hunter, infantryman in Desert Storm, and Greenpeace worker. Today he is the owner of Acupuncture & Herbal Therapies, a multi-practitioner, multimodality practice in St. Petersburg, Florida, and the founder and clinical supervisor of Traditions School of Herbal Studies, also in St. Pete.
Topics discussed include:
-how Dr. Bob found his way into herbalism
-helpful herbs for colds and flu
-herbal and medicinal plants in Florida
-foraging your own edibles and medicinals
-kitchen medicine
-safe and sustainable foraging/harvesting practices
and more!
Find out more about Dr. Bob, his herbal practice, educational events, and school at Acupuncture and Herbal Therapies and Traditions School of Herbal Studies.
If you love the Sustainable Living Show, make sure to tune in every Monday at 11am on 88.5fm or listen to past episodes in the archives here. You can also stay up to date with show happenings on our Facebook page. Head over to the tip jar and direct your donation to Sustainable Living to show your monetary support. Remember, it takes a community to build a community.



source


What we know about the PiCoin craze in Northern Nigeria – TechCabal

Horsemen in action during the 2019 Durbar festival in Kano. Image source: Khalid Ozavogu Abdul (@OzavoguAbdul on Twitter)
On Wednesday, members of a closed cryptocurrency support Telegram channel held meetings across four states in northern Nigeria—Kano, Niger, Sokoto, and Katsina—on how they could enjoy the full promise of the Pi Network, an entity that describes itself on its website as “the first and only digital currency you can mine on your phone.” The resolution, according to Bashir Zhamani*, 27, who was part of one of the meetings held in Suleja, a city in Niger State, was to build a Pi chain mall in Kano, Nigeria, where people can shop using their PiCoin, a digital currency mined on the Pi Network app. 
“Pioneers in China are already selling and buying phones and cars with their Pi,” he said. “We need to also start doing it here.” “Pioneer” is the name given to Pi Network users.
There are reports of malls scattered across Asia accepting Pi as payment, and a myriad of social media posts corroborating that—tweets about Asians, especially Chinese, paying for electronic gadgets with their PiCoin. Some Nigerians in the north of the country have also reportedly acquired cars and phones using their PiCoin. However, for a token that has no official price and hasn’t started trading yet, the buzz is curious.
PiCoin was founded in 2013 by three Stanford students (Vince McPhilip, Chengdiao Fan, and Nicolas Kokkalis), and its web app launched in 2014. The mobile app was released on March 14, 2019, allowing users to download it and earn tokens from their phones. Individuals could earn thousands of Pi tokens by pressing a green flash symbol on the app hourly. The process was simple and fulfilled the purpose of democratising crypto, as it required no monetary investment or strong technological know-how, unlike bitcoin and ethereum. As a result, it quickly acquired lots of users. In fact, as of June, the network said it had crossed the 35 million user mark.
For about eight years, from 2014 until this August, the network was in the mining, or pre-mainnet, phase, meaning users could only mine and hold. This phase took so long that it bored its enthusiasts, and they abandoned the app en masse, while some even declared it a waste of time. “I stopped mining for two years,” said Debo, a crypto enthusiast who lives in Ilorin, a capital city in the north-central region of Nigeria. But in March, when the network announced that users could complete know-your-customer (KYC) verification, a requirement for moving to the next level, the mainnet phase, enthusiasm was rekindled.
The mainnet phase is simply the stage where actions can be carried out on the coin within the network. Users can, for instance, perform P2P transactions with other pioneers who have also reached the mainnet stage. According to the network, the goals of the mainnet phase are to make further progress in decentralisation and utilities, ensure stability and longevity, and retain growth and security. 
Zhamani, with 700 Pi, hasn’t been verified yet, while some pioneers he brought onto the network, including some with as little as 60 Pi, have been. “Verification is by luck. Looks like the core team is using the KYC to screen out people,” Zhamani said. The core team is the team behind the Pi Network. Verification can take between five days and six months after a means of identification is submitted. Debo, on the other hand, has been verified; it took him two weeks.
Debo has moved all his 1,700 Pi into the mainnet, but he said he has locked them up for another three years. “I believe PiCoin has the tendency to be almost as valuable as the top crypto in the next three years, so I will wait until it launches fully,” he told TechCabal. Debo is betting the same way he did on Solana, a coin he said gave him his first million. 
But not everybody can wait like Debo. For Audu*, a primary school teacher at a public school in Kano who also got verified onto the mainnet, selling is the best option, as he doesn’t want to wait another three years for monetary value. He entered the mainnet with about 2,500 Pi, and as soon as he learned he could do P2P transactions, he sold off more than half of it. “I still have 1,000 Pi to sell or keep, but I’m glad to have made money from the effort,” he told TechCabal. 
A Pi was selling for ₦350 in August when the mainnet migration started; it then dropped to ₦300 and currently trades at about ₦150. Audu was able to sell his coins at ₦300 per Pi, which means he made roughly ₦450,000 from a currency he acquired with a few taps on an app. He mentioned that his wife has more Pi than he does and has made close to a million naira. Now that the price has fallen to half of what they sold for, they are both considering holding the rest of their coins until the next phase arrives and values go up. But for them, nothing is carved in stone, and it feels good to know they can always pull up an app and sell off a few assets whenever they need money. 
For those buying Pi, there are two reasons: first, reselling to Asian buyers who use it to acquire actual products; second, collecting and holding it until it becomes the next bitcoin. This was confirmed to TechCabal by Soft, a crypto trader who asked for his real name to be withheld due to privacy concerns. “There’s no other reason outside of these two reasons because the coin can only stay within the network for now,” Soft said. 
The Pi Network uses a halving system, the same mechanism bitcoin and other blockchains use to limit the supply of a coin: the network halves the amount of Pi given in rewards after reaching certain milestones. For instance, users could mine 1.6 Pi per hour when the network first started, but now that the user count is in the tens of millions, only 0.2 Pi can be mined per hour. Once the network reaches 1 billion users, the mining reward drops to zero.
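As an illustration of how a milestone-based halving schedule of this kind might work, here is a small sketch. The article supplies only three data points (1.6 Pi at launch, 0.2 Pi per hour at tens of millions of users, and zero at 1 billion users), so the milestone thresholds below are hypothetical and should not be read as Pi Network's actual schedule.

```python
# Hypothetical milestones, chosen only so the output matches the figures quoted
# in the article; they are not Pi Network's published schedule.
MILESTONES = [100_000, 1_000_000, 10_000_000]
CUTOFF_USERS = 1_000_000_000           # rewards reportedly stop at 1 billion users
BASE_RATE = 1.6                        # Pi per hour at launch, per the article

def hourly_reward(user_count: int) -> float:
    """Halve the base mining rate once for every milestone the network has crossed."""
    if user_count >= CUTOFF_USERS:
        return 0.0
    halvings = sum(user_count >= m for m in MILESTONES)
    return BASE_RATE / (2 ** halvings)

print(hourly_reward(35_000_000))       # 0.2 Pi/hour at tens of millions of users
print(hourly_reward(1_000_000_000))    # 0.0 once the billion-user cutoff is reached
```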
A few days before the group meeting in Kano, a Twitter account named pen_griffen shared an image announcing the registration of a business called Arewa Pi Mega Mall with the Corporate Affairs Commission (CAC). While the image looks doctored, a quick search of the corporate registry confirmed that the business is indeed registered. 
This level of enthusiasm is not strange in the cryptocurrency world, as almost all tokens and shitcoins pass through a hype phase in which holders dream of a utopia powered by their newly found and beloved coin, before they go bust. But unlike most shitcoins, which were either rug-pulled or went bust from a lack of utility within months or a year of launching, the PiCoin core team has taken its time to build the network. In fact, the team has announced that it isn’t offering an initial coin offering (ICO), meaning it won’t be selling the token. So anybody who wants a piece of the pie must download the Pi app, and the roadmap in its white paper is very clear about this.
Regardless of how good this coin may sound, whether it is another crypto heaven or a hell waiting to be unleashed is still unknown. But if there is one thing we know about the cryptoverse, it is that time will always tell. 

source