NeatoCal is a tiny JavaScript app that outputs a printable calendar with a full year on a single page. I love the view where all the weekends line up.
We aren’t just using these AI tools as assistants anymore; they’re fixing code bugs on their own, making full movies from a sentence, and staying focused for days without forgetting the plan. We went from having helpful assistants to creating actual digital coworkers in less than a year.
The biggest thing that happened in 2025? Specialisation. The big tech companies finally stopped pretending one “super brain” could do everything perfectly and started building specialists instead. It’s way better this way because now picking a model is just like hiring a pro: you don’t hire a plumber to do your taxes.
Whether you need a poet, a mathematician, or a filmmaker, the question isn’t “which AI is smartest” anymore—it’s just about picking the right tool for the specific mess you’re trying to clean up.
Here are the best AI models of 2025 categorised based on what they do:
3,000,000+ Systems Tested and 5,700+ CPU Models

PassMark Software has delved into the millions of benchmark results that PerformanceTest users have posted to its website and produced a comprehensive range of CPU charts to help compare the relative speeds of different processors from Intel, AMD, Apple, Qualcomm and others.
Included in these lists are CPUs designed for servers and workstations (such as Intel Xeon and AMD EPYC processors), desktop CPUs (Intel Core Series and AMD Ryzen), in addition to ARM processors (Apple M1 and Qualcomm Snapdragon) and mobile CPUs.
This chart is made up of millions of PerformanceTest benchmark results and is updated daily with new graphics card benchmarks. This high-end chart contains high-performance video cards typically found in premium gaming PCs. Recently introduced AMD video cards and NVIDIA graphics cards using the PCI-Express (or PCI-E) standard are common in our high-end video card charts.
NVIDIA today announced the NVIDIA Nemotron™ 3 family of open models, data and libraries designed to power transparent, efficient and specialized agentic AI development across industries.
The Nemotron 3 models — with Nano, Super and Ultra sizes — introduce a breakthrough hybrid latent mixture-of-experts (MoE) architecture that helps developers build and deploy reliable multi-agent systems at scale.
As organizations shift from single-model chatbots to collaborative multi-agent AI systems, developers face mounting challenges, including communication overhead, context drift and high inference costs. In addition, developers require transparency to trust the models that will automate their complex workflows. Nemotron 3 directly addresses these challenges, delivering the performance and openness customers need to build specialized, agentic AI.
“Open innovation is the foundation of AI progress,” said Jensen Huang, founder and CEO of NVIDIA. “With Nemotron, we’re transforming advanced AI into an open platform that gives developers the transparency and efficiency they need to build agentic systems at scale.”
NVIDIA Nemotron supports NVIDIA’s broader sovereign AI efforts, with organizations from Europe to South Korea adopting open, transparent and efficient models that allow them to build AI systems aligned to their own data, regulations and values.
Early adopters, including Accenture, Cadence, CrowdStrike, Cursor, Deloitte, EY, Oracle Cloud Infrastructure, Palantir, Perplexity, ServiceNow, Siemens, Synopsys and Zoom, are integrating models from the Nemotron family to power AI workflows across manufacturing, cybersecurity, software development, media, communications and other industries.
“NVIDIA and ServiceNow have been shaping the future of AI for years, and the best is yet to come,” said Bill McDermott, chairman and CEO of ServiceNow. “Today, we’re taking a major step forward in empowering leaders across all industries to fast-track their agentic AI strategy. ServiceNow’s intelligent workflow automation combined with NVIDIA Nemotron 3 will continue to define the standard with unmatched efficiency, speed and accuracy.”
As multi-agent AI systems expand, developers are increasingly relying on proprietary models for state-of-the-art reasoning while using more efficient and customizable open models to drive down costs. Routing tasks between frontier-level models and Nemotron in a single workflow gives agents the most intelligence while optimizing tokenomics.
Here you will quickly learn all about local LLM hardware, software & models to try out first. There are many reasons why one might try to get into local large language models. One is wanting to own a local and fully private, personal AI assistant. Another is a need for a capable roleplay companion or story writing helper. Whatever your goal is, this guide will walk you through the basics of local LLMs including hardware requirements, inference software options, and lightweight models to start with. Enjoy!
The real problem of humanity is the following: we have paleolithic emotions; medieval institutions; and god-like technology.
Almost every technological innovation of the past several years has been laser-focused on one thing: generative AI. Many of these supposedly revolutionary systems run on big, expensive servers in a data center somewhere, but at the same time, chipmakers are crowing about the power of the neural processing units (NPU) they have brought to consumer devices. Every few months, it’s the same thing: This new NPU is 30 or 40 percent faster than the last one. That’s supposed to let you do something important, but no one really gets around to explaining what that is.
Experts envision a future of secure, personal AI tools with on-device intelligence, but does that match the reality of the AI boom? AI on the “edge” sounds great, but almost every AI tool of consequence is running in the cloud. So what’s that chip in your phone even doing?
What is an NPU?
Companies launching a new product often get bogged down in superlatives and vague marketing speak, so they do a poor job of explaining technical details. It’s not clear to most people buying a phone why they need the hardware to run AI workloads, and the supposed benefits are largely theoretical.
Many of today’s flagship consumer processors are systems-on-a-chip (SoC) because they incorporate multiple computing elements—like CPU cores, GPUs, and imaging controllers—on a single piece of silicon. This is true of mobile parts like Qualcomm’s Snapdragon or Google’s Tensor, as well as PC components like the Intel Core Ultra.
SAN FRANCISCO (AP) — OpenAI CEO Sam Altman has issued a “code red” alert to employees, telling them to improve the company’s flagship product, ChatGPT, and delay other product development, according to The Wall Street Journal.
The newspaper reported that Altman sent an internal memo to staff Monday saying more work was needed to enhance the artificial intelligence chatbot’s speed, reliability and personalization features.
This week marks three years since OpenAI first released ChatGPT, sparking global fascination and a commercial boom in generative AI technology and giving the San Francisco-based startup an early lead. But the company faces increased competition with rivals, including Google, which last month unleashed Gemini 3, the latest version of its own AI assistant.
Micron is retiring the Crucial brand, marking the end of its line of budget-friendly solid-state drives (SSDs) and RAM kits, as reported earlier by VideoCardz. In an announcement on Wednesday, Micron says winding down its consumer-focused business will “improve supply and support for our larger, strategic customers in faster-growing segments” — a.k.a. AI companies.
Nike’s new “neuroscience-based footwear” is designed to activate an athlete’s brain before and after a big game. The two shoes, a mule (the $95 Mind 001) and a lace-up sneaker (the $145 Mind 002), feature a distinctive array of 22 orange foam nodes embedded in each sole. Nike says the nodes each move up and down independently, like “pistons and gimbals,” as the athlete walks, mimicking the feeling of walking on the ground in a way that is “scientifically shown” to stimulate the foot and thus activate the brain’s sensory areas.
Kiwix is an offline reader for online content like Wikipedia, Project Gutenberg, or TED Talks. It makes knowledge available to people with no or limited internet access. The software as well as the content is free to use for anyone.
TRANSFER YOUR PLAYLISTS AND FAVORITES

The most reliable and fast solution to recreate your music collection across music services.
This is cool: Internet-in-a-Box. “Up to 32 users who are within about 100m of the hotspot can connect to the device and access or download the content that exists on the device: Wikipedia slices, medical knowledge, videos, and books.”
Fun & simple little browser game: Dodge This. “Move to dodge the bullets. How long can you survive?”
Parachute is a set-and-forget backup companion for iCloud Photos and iCloud Drive. It automatically syncs your memories—photos, videos, and documents—to your own storage, giving you peace of mind and full control.
Liu recommends that students use generative AI to write literature reviews, draft abstracts, generate charts, and organize thoughts. She’s created slides that lay out detailed examples of good and bad prompts, along with one core principle: AI can’t replace human judgment. “Only high-quality input and smart prompting can lead to good results,” she says.
Here’s a small demo game built with Phaser, used to test how Phaser games can be embedded directly in blog posts.
In this simple game, you click the circle and it shows how fast your response time is. No bells and whistles — just a quick test to prove the concept.
Go ahead and give it a try! It opens in a new tab for a cleaner, full-page experience.
Here’s a small demo game built with Phaser, used to test how Phaser games can be embedded directly in blog posts.
In this simple game, you control a car using only the left and right arrow keys. The goal? Navigate to the end of the road without crashing. No bells and whistles — just a quick test to prove the concept.
Go ahead and give it a try! It opens in a new tab for a cleaner, full-page experience.
Q: With Mozilla announcing the shutdown of Pocket, many users are left wondering where to turn. Are there any competitors stepping in to fill the gap?
A: Yes, absolutely. A number of tools are emerging or growing in popularity as Pocket winds down. Notable alternatives include Instapaper, Raindrop.io, Omnivore, Matter, and Readwise Reader. Each of these services offers slightly different takes on the "read-it-later" model—some focusing on minimalism, others on tagging, highlighting, or integration with reading workflows.
Q: Does Pinboard still play a role in this ecosystem?
A: Pinboard is more of an archival bookmarking service than a full-fledged reader, but it certainly still serves a dedicated audience. It's known for its simplicity, speed, and long-term data retention, but lacks a modern interface and advanced parsing features. You won’t get the same kind of clean reading experience that Pocket or Instapaper users expect.
Q: Are there any services that combine features like saving bookmarks, full-article parsing, and providing an RSS feed of saved content?
A: Yes. Omnivore, Raindrop.io, and Readwise Reader stand out in this area. Many of them allow you to save content via browser extensions or email, and then expose that saved content via RSS so you can read it in clients like NetNewsWire. Readwise Reader goes a step further by integrating highlighting, annotation, and syncing to Readwise's spaced repetition system.
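Once a service exposes your saved articles as RSS, consuming the feed is straightforward. Here is a minimal sketch using only Python's standard library; the feed structure and URLs are illustrative assumptions, since each service shapes its feed differently:

```python
import xml.etree.ElementTree as ET

# An inline sample of the kind of RSS 2.0 feed a read-it-later
# service might expose for saved articles (structure is hypothetical).
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Saved Articles</title>
    <item>
      <title>Why NPUs Matter</title>
      <link>https://example.com/npus</link>
    </item>
    <item>
      <title>Local LLM Basics</title>
      <link>https://example.com/local-llms</link>
    </item>
  </channel>
</rss>"""

def saved_items(feed_xml: str) -> list[tuple[str, str]]:
    """Return (title, link) pairs for every <item> in an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [
        (item.findtext("title", ""), item.findtext("link", ""))
        for item in root.iter("item")
    ]

for title, link in saved_items(SAMPLE_FEED):
    print(f"{title} -> {link}")
```

In practice you would fetch the feed URL the service gives you and hand the response body to the same parser; a dedicated RSS client like NetNewsWire does exactly this polling for you.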
Q: How does the reading experience compare across services?
A: Services like Readwise Reader, Instapaper, and Matter shine when it comes to readability. They don’t just bookmark a link; they scrape the content, extract the main article body, and present it in a clean, distraction-free format. NetNewsWire, as a traditional RSS reader, does a good job with full-text feeds, but doesn’t parse or clean articles itself.
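Real readers use much more sophisticated heuristics (Readability-style scoring of candidate blocks), but the core idea — keep the text, drop the tags, skip elements that are usually page chrome — can be sketched with Python's stdlib `html.parser`:

```python
from html.parser import HTMLParser

class ArticleText(HTMLParser):
    """Very naive 'reader mode': collect text nodes, but ignore
    anything nested inside elements that are typically clutter."""
    SKIP = {"script", "style", "nav", "aside", "footer", "header"}

    def __init__(self):
        super().__init__()
        self.depth = 0      # nesting depth inside skipped elements
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth == 0 and data.strip():
            self.chunks.append(data.strip())

def extract_text(html: str) -> str:
    parser = ArticleText()
    parser.feed(html)
    return " ".join(parser.chunks)

page = ("<html><nav>Home | About</nav>"
        "<article><h1>Title</h1><p>The actual story.</p></article>"
        "<aside>Ad: buy things</aside></html>")
print(extract_text(page))  # Title The actual story.
```

The navigation bar and the ad sidebar vanish; only the article body survives. Production extractors add scoring (text density, link ratio, class-name hints) to find the main column rather than relying on a fixed skip list.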
Q: So these platforms essentially bypass the ads and junk of most websites?
A: Exactly. They pull just the main content and ignore ads, pop-ups, sidebars, and other clutter. This is a massive win for readers, though obviously not so great for publishers who rely on ad revenue or subscriptions.
Q: Speaking of subscriptions, how do these tools work with paywalled sites like the New York Times?
A: It varies. Some services can fetch content behind soft paywalls, but hard paywalls—like those requiring logins—are tougher. Readwise Reader, for example, won’t access subscriber-only content unless it’s publicly available or lightly restricted. Some tools allow you to paste in article text manually if needed.
Q: Are any readers trying to log in on behalf of the user to fetch restricted content?
A: Not widely, at least not yet. Logging in on behalf of a user and scraping subscription content introduces legal and ethical complexities. But we might see more sophisticated options in the future that offer secure credential management for premium content access.
Q: Do you think this will change with the rise of agentic AI?
A: Absolutely. The future points toward agentic AI systems that act on behalf of users—fetching, parsing, and even summarizing or annotating reading material proactively. Imagine a system that knows your interests, monitors your preferred sites or feeds, logs in when necessary, and delivers relevant, cleaned content to you daily. That's where we're heading, and it’s going to fundamentally reshape how we consume written information.
As tools like Pocket fade into history, a new era of intelligent, agent-driven readers is emerging. These systems don’t just store links; they work on the user’s behalf—retrieving, formatting, and delivering the written word in the cleanest, most accessible way possible. The read-it-later experience is rapidly evolving from a simple bookmarking function into a sophisticated, AI-enhanced reading concierge. And as agentic AI becomes more capable, the future of information consumption looks more streamlined.
Simple Notifications
Pushover makes it easy to get real-time notifications on your Android, iPhone, iPad, and Desktop (Android Wear and Apple Watch, too!)
Powered by Pushover
With our Android, iPhone & iPad, and Desktop Browser clients, you can receive unlimited push notifications on all of your devices from dozens of websites, services, and applications that already integrate with Pushover. Just supply your Pushover User Key or your Pushover e-mail address and you'll be getting push notifications in an instant.
Pushover for Teams
Pushover for Teams is a monthly service offering for organizations sending messages to multiple users and includes a number of extra features such as user management and automated onboarding. Pricing is per month, per user, and more information can be found on our Teams page.
Pushover for Everyone
Individuals and organizations not needing our Team features can use Pushover for Android, iOS, and Desktop with no subscription and just a simple one-time in-app purchase on each platform where you need it, after a 30-day free trial.
Simple Integration
For developers, system administrators, and everyone with just some technical savvy, our API makes it easy to integrate Pushover into your web app, network monitor, shell script, and anything else you can think of to send notifications to yourself or thousands of users. Pushing messages is as easy as using the HTTP libraries available in nearly every programming language with no custom modules required.
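Pushing a message really is a single HTTP POST to the API's messages endpoint. A minimal sketch with Python's standard library — `APP_TOKEN` and `USER_KEY` are placeholders for your own application token and user key:

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

API_URL = "https://api.pushover.net/1/messages.json"

def build_request(token: str, user: str, message: str, title: str = "") -> Request:
    """Build the form-encoded POST request Pushover expects:
    an application token, a user key, and the message text."""
    fields = {"token": token, "user": user, "message": message}
    if title:
        fields["title"] = title
    return Request(API_URL, data=urlencode(fields).encode("utf-8"))

# With real credentials, sending the notification is one call:
#   urlopen(build_request("APP_TOKEN", "USER_KEY", "Backup finished"))
req = build_request("APP_TOKEN", "USER_KEY", "Backup finished", title="cron")
print(req.full_url)
print(req.data.decode())
```

The same request works from curl, a cron job, or any language with an HTTP client, which is the point: no SDK or custom module is required.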
Send yourself native notifications from your apps and servers. Free to try, $5/month for unlimited.