OnePlus Open could outfold the Galaxy Z Fold 5


Credit: MySmartPrix
  • The OnePlus Open is reportedly being tested by the company to withstand 400,000 folds.
  • The foldable phone might end up being as durable as, or more durable than, the Samsung Galaxy Z Fold 5.

OnePlus Open seems to be inching closer to launch if the increasing frequency of leaks about the device is anything to go by. The latest information about the phone comes courtesy of Max Jambor, who posted about the foldable phone on X (formerly Twitter).

How to check data usage on your Android device

Blazing 5G network speeds and the proliferation of video content mean we’re burning through mobile data on our devices faster than ever. This can be problematic if you’re on a limited data plan that charges you more for going over your allocation, and even more so if you’re abroad and paying by the gigabyte. Even on unlimited data plans, your speeds can sometimes get severely throttled if you exceed a certain usage threshold. For these reasons, it’s important to know how to check data usage on your Android device — and to learn how to manage it.

We’ll run you through everything you need to know in this quick guide. You’re on the wrong page if you’re an iOS user, but we also have a guide to checking mobile data usage on iPhone.

Decoding the Naming Conventions of Ransomware Malware

In the ever-evolving landscape of cyber threats, one form of digital menace has gained significant notoriety: ransomware malware. These malicious programs encrypt victims’ data and demand a ransom for its release, wreaking havoc on individuals, businesses, and even government institutions. One intriguing aspect of ransomware is the distinct and often creative names these threats are given. Delving into the process of naming ransomware malware provides insights into the psychology of cybercriminals and their intentions.

The Art of Naming Ransomware

Ransomware developers tend to name their creations with an assortment of motives in mind. Some aim for attention-grabbing names to garner media coverage, while others prefer obscure monikers that fly under the radar, allowing them to carry out attacks unnoticed. The naming process is akin to branding for cybercriminals, with the chosen name serving as a tool to strike fear, assert dominance, or even make political or social statements.

Themes and Inspiration

Ransomware names often draw inspiration from a variety of sources, including pop culture, literature, mythology, and even technology itself. The notorious “WannaCry” ransomware, for instance, gained global attention due to the speed at which it spread and its destructive impact, while its name seemed to allude to the plea victims might utter when faced with their encrypted files. Similarly, names like “Locky,” “GandCrab,” and “Ryuk” infuse a sense of personality and character into the malware, adding an unsettling layer to their destructive nature.

The Psychological Impact

The names of ransomware malware are carefully chosen to instill fear, uncertainty, and a sense of urgency in victims. Cybercriminals leverage psychological tactics to pressure victims into paying the demanded ransom quickly. By giving their malware ominous or evocative names, hackers aim to manipulate the emotional state of those affected, increasing the likelihood of compliance.

Linguistic Considerations

In some cases, ransomware developers consider linguistic factors to ensure their creations have a global impact. They may select names that are easy to remember and pronounce across different languages and cultures, maximizing the reach of their threat campaigns. This linguistic adaptability further underscores the deliberate strategy behind the naming process.

The Role of Cybersecurity Researchers

The cybersecurity community plays a crucial role in identifying and combating ransomware threats. Security experts often assign their own names or labels to ransomware variants to aid in communication and analysis. These names are typically less sensational and more descriptive, focusing on technical attributes or specific characteristics of the malware. This approach allows researchers to efficiently categorize and track ransomware strains.

Conclusion

The naming of ransomware malware is a multifaceted phenomenon that offers a glimpse into the complex world of cybercrime. From attention-seeking to psychological manipulation, the names chosen for these malicious programs reveal the intentions and strategies of cybercriminals. As the battle against ransomware continues, understanding the significance of these names becomes increasingly important for cybersecurity professionals and the general public alike.


The Latest in Cybersecurity Incidents Making It to Google Headlines

Collaborative Efforts Dismantle Qakbot Malware’s IT Infrastructure

In a significant joint operation, the FBI, in partnership with the Department of Justice and international allies, has successfully taken down the IT infrastructure owned by the Qakbot malware group. Drawing on expertise from cyber law enforcement units in countries including France, the USA, Germany, the Netherlands, Romania, Latvia, and the UK, the agencies launched a coordinated cyber attack against the botnet infrastructure. This operation aimed to disrupt the malicious activities carried out by cybercriminals using Qakbot, including ransomware distribution, DDoS attacks, financial fraud, and various forms of social engineering.

The collaborative effort yielded positive results, with law enforcement agencies managing to infiltrate the Qakbot infrastructure. Their efforts unveiled a staggering 700,000 infected computers worldwide, all harboring the Qakbot malware. Particularly concerning was the identification of over 200,000 infected computers within the United States alone.

University of Michigan’s Network Disrupted Due to Suspicious Activity

In a recent cybersecurity development, the University of Michigan has taken the precautionary step of severing network connections for its students and staff since August 27, 2023. The decision came in response to the detection of suspicious activities within the university’s computer network across its campuses.

The university’s IT teams are working tirelessly to rectify the situation and restore network services as swiftly as possible. While the restoration process is underway, the administration has granted temporary permission for students and staff to access certain applications such as Zoom, Adobe, Dropbox, Slack, Google, and Canva from external networks using school devices.

Hospital Sisters Health System Takes Protective Measures Against Network Malware

Hospital Sisters Health System (HSHS) has taken a proactive stance in the face of a potential network malware infection. Over the past two days, the healthcare provider has opted to shut down its computer network to contain any potential threats and safeguard its clinical and administrative applications.

HSHS has released a statement regarding the temporary shutdown, outlining the suspension of services such as MyChart Communications. This platform is typically used by patients to manage appointments, view test results, access medical history, and make payments. The network will remain inactive until further notice, reflecting HSHS’s commitment to maintaining the integrity of patient data and healthcare operations.


Apple AirPods with USB-C could join the iPhone 15 series on Sep. 12


Credit: Austin Kwok / Android Authority
  • According to a trusted Apple insider, the company will launch USB-C AirPods alongside the new iPhone 15 lineup on September 12.
  • The AirPods Pro 2 earbuds are most likely to get the refresh.
  • Apple reportedly has no plans to refresh the non-pro AirPods with a USB-C charging case.

Apple has scheduled a special event for September 12 to launch the iPhone 15 series and new Apple Watches. The biggest change coming to the Apple phones this year is a switch from Lightning to USB-C ports, and it looks like the iPhones won’t be alone in this transition. Bloomberg’s Mark Gurman reports what’s been suspected for a while now — Apple will also refresh the AirPods with USB-C to complement the iPhone 15 lineup.

ASUS confirms it’s NOT shutting down the Zenfone division


Credit: Robert Triggs / Android Authority
  • ASUS is not shutting down the Zenfone division.
  • The company denied that the Zenfone 10 would be the last flagship in the line.
  • ASUS confirmed it’ll continue making smartphones under the ROG and Zenfone brands.

ASUS has denied a previous report about the possible shutdown of the Zenfone division. The company issued a press statement rubbishing a Taiwanese media outlet’s claim that the Zenfone 10 would be the last flagship in the Zenfone line.

watchOS 10: Release date, features, and what we hope to see


Credit: Kaitlyn Cimino / Android Authority

An updated software experience is headed to Apple’s smartwatches in the coming weeks, alongside the launch of the company’s Apple Watch Series 9. Apple provided a sneak peek of watchOS 10 at WWDC 2023 in early June, and Apple Watch users have plenty to look forward to when it lands. Here’s everything we expect and know about watchOS 10’s features and its expected release date.

watchOS 10: At a glance 

The Pixel 8 camera app needs an overhaul, not meagre UI tweaks


Credit: Ryan Haines / Android Authority

Google’s Pixel phones have long had a reputation for delivering fantastic cameras, harking back to the original Pixel and even the Nexus 6P. But after using my Pixel 7 Pro for over nine months, I’ve realized that the Pixel camera app needs an overhaul.

No, I’m not talking about an overhaul per our recent Pixel 8 camera UI leak. Instead, I’m talking about broader additions that would bring the camera app in line with camera apps from the likes of Samsung, Xiaomi, and others.

I tried a PopSocket for the first time and once I popped, I couldn’t stop


Credit: Rita El Khoury / Android Authority

I don’t like sticking permanent things to the back of my Android phone. Dealing with a thicker phone, an obstacle that creates friction each time I want to put it in my pocket or take it out, and an uneven and wobbly device every time I place it on a flat surface is just not worth the sacrifice for me. Or at least that’s what I thought until I got to test a couple of PopSockets.

On paper, these little round pucks are everything I hate attaching to my phone. But there’s a MagSafe version that promises to provide the same ergonomics while still being easily removable. A win-win solution, right? To my surprise, I’ve found myself getting used to the regular, sticky PopSocket more than the MagSafe one. Maybe it was the very fact that I couldn’t remove it that got me sold on it, or maybe I was too busy hopping around Czechia for 10 days of adventures to give it a second thought. Suffice it to say, I kind of get why PopSockets are so popular now. And I may be a convert — or at least on the way to becoming one.

watchOS 10 killed my favorite Apple Watch features

Opinion post by Dhruv Bhutani

The latest watchOS 10 release is one of the most significant upgrades to the Apple Watch’s user interface ever. The upgrade principally revolves around a new widget stack that can be accessed by swiping up from the watch face. Users can then scroll through running activities or any other app they’ve pinned to the stack.

In anticipation of the Apple Watch 9, I’ve been testing the beta release of watchOS 10 for the last few weeks and can attest that the widgets speed up access to your favorite apps. However, these new additions come at the cost of two existing features I use almost daily. It goes without saying I’m pretty unhappy about it.

Microsoft Office Pro for just $34.97 is the ultimate back-to-school deal

You’re already spending enough on new hardware and materials when you’re heading back to school, so you don’t need a massive outlay or regular subscription for software eating into your budget. That’s what makes this Microsoft Office Pro 2021 deal such a gem. For a one-time payment of just $34.97, you get lifetime access to the ubiquitous software package.

Microsoft Office Professional 2021 for $34.97 ($185 off)

WhatsApp finally rolling out support for HD video today to Android and iOS


Credit: Edgar Cervantes / Android Authority
  • WhatsApp HD video support is rolling out now to Android and iOS.
  • This bumps video quality from the previous 480p limit to 720p.
  • You can still choose to send videos in the lower quality.

Last week, WhatsApp announced that support for HD photos was rolling out. This finally enabled users to send higher-quality photos, should they so choose. During that announcement, WhatsApp said similar support for high-definition videos was coming soon.

Today, WhatsApp fulfilled that promise (via TechCrunch). Starting today, the latest version of the Android and iOS apps will support WhatsApp HD video.

Responsible AI at Google Research: Perception Fairness

Google’s Responsible AI research is built on a foundation of collaboration — between teams with diverse backgrounds and expertise, between researchers and product developers, and ultimately with the community at large. The Perception Fairness team drives progress by combining deep subject-matter expertise in both computer vision and machine learning (ML) fairness with direct connections to the researchers building the perception systems that power products across Google and beyond. Together, we are working to intentionally design our systems to be inclusive from the ground up, guided by Google’s AI Principles.

Perception Fairness research spans the design, development, and deployment of advanced multimodal models including the latest foundation and generative models powering Google’s products.

Our team’s mission is to advance the frontiers of fairness and inclusion in multimodal ML systems, especially related to foundation models and generative AI. This encompasses core technology components including classification, localization, captioning, retrieval, visual question answering, text-to-image or text-to-video generation, and generative image and video editing. We believe that fairness and inclusion can and should be top-line performance goals for these applications. Our research is focused on unlocking novel analyses and mitigations that enable us to proactively design for these objectives throughout the development cycle. We answer core questions, such as: How can we use ML to responsibly and faithfully model human perception of demographic, cultural, and social identities in order to promote fairness and inclusion? What kinds of system biases (e.g., underperforming on images of people with certain skin tones) can we measure and how can we use these metrics to design better algorithms? How can we build more inclusive algorithms and systems and react quickly when failures occur?

Measuring representation of people in media

ML systems that can edit, curate or create images or videos can affect anyone exposed to their outputs, shaping or reinforcing the beliefs of viewers around the world. Research to reduce representational harms, such as reinforcing stereotypes or denigrating or erasing groups of people, requires a deep understanding of both the content and the societal context. It hinges on how different observers perceive themselves, their communities, or how others are represented. There’s considerable debate in the field regarding which social categories should be studied with computational tools and how to do so responsibly. Our research focuses on working toward scalable solutions that are informed by sociology and social psychology, are aligned with human perception, embrace the subjective nature of the problem, and enable nuanced measurement and mitigation. One example is our research on differences in human perception and annotation of skin tone in images using the Monk Skin Tone scale.

Our tools are also used to study representation in large-scale content collections. Through our Media Understanding for Social Exploration (MUSE) project, we’ve partnered with academic researchers, nonprofit organizations, and major consumer brands to understand patterns in mainstream media and advertising content. We first published this work in 2017, with a co-authored study analyzing gender equity in Hollywood movies. Since then, we’ve increased the scale and depth of our analyses. In 2019, we released findings based on over 2.7 million YouTube advertisements. In the latest study, we examine representation across intersections of perceived gender presentation, perceived age, and skin tone in over twelve years of popular U.S. television shows. These studies provide insights for content creators and advertisers and further inform our own research.

An illustration (not actual data) of computational signals that can be analyzed at scale to reveal representational patterns in media collections. [Video Collection / Getty Images]

Moving forward, we’re expanding the ML fairness concepts on which we focus and the domains in which they are responsibly applied. Looking beyond photorealistic images of people, we are working to develop tools that model the representation of communities and cultures in illustrations, abstract depictions of humanoid characters, and even images with no people in them at all. Finally, we need to reason about not just who is depicted, but how they are portrayed — what narrative is communicated through the surrounding image content, the accompanying text, and the broader cultural context.

Analyzing bias properties of perceptual systems

Building advanced ML systems is complex, with multiple stakeholders informing various criteria that decide product behavior. Overall quality has historically been defined and measured using summary statistics (like overall accuracy) over a test dataset as a proxy for user experience. But not all users experience products in the same way.

Perception Fairness enables practical measurement of nuanced system behavior beyond summary statistics, and makes these metrics core to the system quality that directly informs product behaviors and launch decisions. This is often much harder than it seems. Distilling complex bias issues (e.g., disparities in performance across intersectional subgroups or instances of stereotype reinforcement) to a small number of metrics without losing important nuance is extremely challenging. Another challenge is balancing the interplay between fairness metrics and other product metrics (e.g., user satisfaction, accuracy, latency), which are often phrased as conflicting despite being compatible. It is common for researchers to describe their work as optimizing an “accuracy-fairness” tradeoff when in reality widespread user satisfaction is aligned with meeting fairness and inclusion objectives.

We built and released the MIAP dataset as part of Open Images, leveraging our research on perception of socially relevant concepts and detection of biased behavior in complex systems to create a resource that furthers ML fairness research in computer vision. Original photo credits — left: Boston Public Library; middle: jen robinson; right: Garin Fons; all used with permission under the CC BY 2.0 license.

To these ends, our team focuses on two broad research directions. First, democratizing access to well-understood and widely-applicable fairness analysis tooling, engaging partner organizations in adopting it into product workflows, and informing leadership across the company in interpreting results. This work includes developing broad benchmarks, curating widely-useful high-quality test datasets and tooling centered around techniques such as sliced analysis and counterfactual testing — often building on the core representation signals work described earlier. Second, advancing novel approaches towards fairness analytics — including partnering with product efforts that may result in breakthrough findings or inform launch strategy.
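To make the sliced-analysis idea concrete, here is a minimal sketch of what such tooling computes, written in plain Python with made-up data; the function name, attribute labels, and records are hypothetical illustrations, not Google’s internal tooling or datasets.

```python
# Minimal sketch of sliced analysis: report a model's error rate per subgroup
# and per intersection of subgroups, rather than a single overall number.
# All names and records below are hypothetical, for illustration only.
from collections import defaultdict

def sliced_error_rates(records, slice_keys):
    """records: dicts with a boolean 'correct' field plus subgroup attributes.
    Returns {slice_value_tuple: error_rate} for the requested slice_keys."""
    totals = defaultdict(lambda: [0, 0])  # slice -> [errors, count]
    for r in records:
        key = tuple(r[k] for k in slice_keys)
        totals[key][0] += 0 if r["correct"] else 1
        totals[key][1] += 1
    return {k: errs / n for k, (errs, n) in totals.items()}

# Toy evaluation records with perceived attributes annotated on the test set.
records = [
    {"correct": True,  "skin_tone": "MST-2", "age": "adult"},
    {"correct": False, "skin_tone": "MST-9", "age": "adult"},
    {"correct": True,  "skin_tone": "MST-9", "age": "older"},
    {"correct": False, "skin_tone": "MST-2", "age": "older"},
]

overall_error = sum(not r["correct"] for r in records) / len(records)
print("overall error:", overall_error)                                   # 0.5
print("by skin tone:", sliced_error_rates(records, ["skin_tone"]))
print("intersectional:", sliced_error_rates(records, ["skin_tone", "age"]))
```

A counterfactual test follows the same pattern: rerun the model on minimally edited versions of the same inputs that differ only in a sensitive attribute, then compare the two sets of predictions.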

Advancing AI responsibly

Our work does not stop with analyzing model behavior. Rather, we use this as a jumping-off point for identifying algorithmic improvements in collaboration with other researchers and engineers on product teams. Over the past year we’ve launched upgraded components that power Search and Memories features in Google Photos, leading to more consistent performance and drastically improving robustness through added layers that keep mistakes from cascading through the system. We are working on improving ranking algorithms in Google Images to diversify representation. We updated algorithms that may reinforce historical stereotypes, using additional signals responsibly, such that it’s more likely for everyone to see themselves reflected in Search results and find what they’re looking for.

This work naturally carries over to the world of generative AI, where models can create collections of images or videos seeded from image and text prompts and can answer questions about images and videos. We’re excited about the potential of these technologies to deliver new experiences to users and as tools to further our own research. To enable this, we’re collaborating across the research and responsible AI communities to develop guardrails that mitigate failure modes. We’re leveraging our tools for understanding representation to power scalable benchmarks that can be combined with human feedback, and investing in research from pre-training through deployment to steer the models to generate higher quality, more inclusive, and more controllable output. We want these models to inspire people, producing diverse outputs, translating concepts without relying on tropes or stereotypes, and providing consistent behaviors and responses across counterfactual variations of prompts.

Opportunities and ongoing work

Despite over a decade of focused work, the field of perception fairness technologies still seems like a nascent and fast-growing space, rife with opportunities for breakthrough techniques. We continue to see opportunities to contribute technical advances backed by interdisciplinary scholarship. The gap between what we can measure in images versus the underlying aspects of human identity and expression is large — closing this gap will require increasingly complex media analytics solutions. Data metrics that indicate true representation, situated in the appropriate context and heeding a diversity of viewpoints, remain an open challenge for us. Can we reach a point where we can reliably identify depictions of nuanced stereotypes, continually update them to reflect an ever-changing society, and discern situations in which they could be offensive? Algorithmic advances driven by human feedback point to a promising path forward.

Recent focus on AI safety and ethics in the context of modern large model development has spurred new ways of thinking about measuring systemic biases. We are exploring multiple avenues to use these models — along with recent developments in concept-based explainability methods, causal inference methods, and cutting-edge UX research — to quantify and minimize undesired biased behaviors. We look forward to tackling the challenges ahead and developing technology that is built for everybody.

Acknowledgements

We would like to thank every member of the Perception Fairness team, and all of our collaborators.

How to compare a noisy quantum processor to a classical computer

A full-scale error-corrected quantum computer will be able to solve some problems that are impossible for classical computers, but building such a device is a huge endeavor. We are proud of the milestones that we have achieved toward a fully error-corrected quantum computer, but that large-scale computer is still some number of years away. Meanwhile, we are using our current noisy quantum processors as flexible platforms for quantum experiments.

In contrast to an error-corrected quantum computer, experiments in noisy quantum processors are currently limited to a few thousand quantum operations or gates, before noise degrades the quantum state. In 2019 we implemented a specific computational task called random circuit sampling on our quantum processor and showed for the first time that it outperformed state-of-the-art classical supercomputing.

Although they have not yet reached beyond-classical capabilities, we have also used our processors to observe novel physical phenomena, such as time crystals and Majorana edge modes, and have made new experimental discoveries, such as robust bound states of interacting photons and the noise-resilience of Majorana edge modes of Floquet evolutions.

We expect that even in this intermediate, noisy regime, we will find applications for the quantum processors in which useful quantum experiments can be performed much faster than can be calculated on classical supercomputers — we call these “computational applications” of the quantum processors. No one has yet demonstrated such a beyond-classical computational application. So as we aim to achieve this milestone, the question is: What is the best way to compare a quantum experiment run on such a quantum processor to the computational cost of a classical application?

We already know how to compare an error-corrected quantum algorithm to a classical algorithm. In that case, the field of computational complexity tells us that we can compare their respective computational costs — that is, the number of operations required to accomplish the task. But with our current experimental quantum processors, the situation is not so well defined.

In “Effective quantum volume, fidelity and computational cost of noisy quantum processing experiments”, we provide a framework for measuring the computational cost of a quantum experiment, introducing the experiment’s “effective quantum volume”, which is the number of quantum operations or gates that contribute to a measurement outcome. We apply this framework to evaluate the computational cost of three recent experiments: our random circuit sampling experiment, our experiment measuring quantities known as “out-of-time-order correlators” (OTOCs), and a recent experiment on a Floquet evolution related to the Ising model. We are particularly excited about OTOCs because they provide a direct way to experimentally measure the effective quantum volume of a circuit (a sequence of quantum gates or operations), which is itself a computationally difficult task for a classical computer to estimate precisely. OTOCs are also important in nuclear magnetic resonance and electron spin resonance spectroscopy. Therefore, we believe that OTOC experiments are a promising candidate for a first-ever computational application of quantum processors.
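The post does not spell out what an OTOC is mathematically. For reference, the standard definition from the information-scrambling literature is sketched below; W and V are two initially commuting (e.g., spatially separated) unitary operators, and the specific observable measured in any given experiment may be a variant of this form.

```latex
% Standard out-of-time-order correlator (textbook form, not necessarily the
% exact observable measured in the experiments discussed here):
\[
  F(t) = \big\langle W^\dagger(t)\, V^\dagger\, W(t)\, V \big\rangle,
  \qquad W(t) = e^{iHt}\, W\, e^{-iHt}.
\]
% For unitary W and V, the related squared commutator
\[
  C(t) = \big\langle [W(t), V]^\dagger\, [W(t), V] \big\rangle
       = 2\big(1 - \operatorname{Re}\, F(t)\big)
\]
% grows as quantum information scrambles; the ballistic spread of the region
% where C(t) is appreciable defines the butterfly velocity v_B discussed below.
```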

Plot of computational cost and impact of some recent quantum experiments. While some (e.g., QC-QMC 2022) have had high impact and others (e.g., RCS 2023) have had high computational cost, none have yet been both useful and hard enough to be considered a “computational application.” We hypothesize that our future OTOC experiment could be the first to pass this threshold. Other experiments plotted are referenced in the text.

Random circuit sampling: Evaluating the computational cost of a noisy circuit

When it comes to running a quantum circuit on a noisy quantum processor, there are two competing considerations. On one hand, we aim to do something that is difficult to achieve classically. The computational cost — the number of operations required to accomplish the task on a classical computer — depends on the quantum circuit’s effective quantum volume: the larger the volume, the higher the computational cost, and the more a quantum processor can outperform a classical one.

But on the other hand, on a noisy processor, each quantum gate can introduce an error to the calculation. The more operations, the higher the error, and the lower the fidelity of the quantum circuit in measuring a quantity of interest. Under this consideration, we might prefer simpler circuits with a smaller effective volume, but these are easily simulated by classical computers. The balance of these competing considerations, which we want to maximize, is called the “computational resource”, shown below.

Graph of the tradeoff between quantum volume and noise in a quantum circuit, captured in a quantity called the “computational resource.” For a noisy quantum circuit, this will initially increase with the computational cost, but eventually, noise will overrun the circuit and cause it to decrease.

We can see how these competing considerations play out in a simple “hello world” program for quantum processors, known as random circuit sampling (RCS), which was the first demonstration of a quantum processor outperforming a classical computer. Any error in any gate is likely to make this experiment fail. Inevitably, this is a hard experiment to achieve with significant fidelity, and thus it also serves as a benchmark of system fidelity. But it also corresponds to the highest known computational cost achievable by a quantum processor. We recently reported the most powerful RCS experiment performed to date, with a low measured experimental fidelity of 1.7×10⁻³, and a high theoretical computational cost of ~10²³. These quantum circuits had 700 two-qubit gates. We estimate that this experiment would take ~47 years to simulate on the world’s largest supercomputer. While this checks one of the two boxes needed for a computational application — it outperforms a classical supercomputer — it is not a particularly useful application per se.

OTOCs and Floquet evolution: The effective quantum volume of a local observable

There are many open questions in quantum many-body physics that are classically intractable, so running some of these experiments on our quantum processor has great potential. We typically think of these experiments a bit differently than we do the RCS experiment. Rather than measuring the quantum state of all qubits at the end of the experiment, we are usually concerned with more specific, local physical observables. Because not every operation in the circuit necessarily impacts the observable, a local observable’s effective quantum volume might be smaller than that of the full circuit needed to run the experiment.

We can understand this by applying the concept of a light cone from relativity, which determines which events in space-time can be causally connected: some events cannot possibly influence one another because information takes time to propagate between them. We say that two such events are outside their respective light cones. In a quantum experiment, we replace the light cone with something called a “butterfly cone,” where the growth of the cone is determined by the butterfly speed — the speed with which information spreads throughout the system. (This speed is characterized by measuring OTOCs, discussed later.) The effective quantum volume of a local observable is essentially the volume of the butterfly cone, including only the quantum operations that are causally connected to the observable. So, the faster information spreads in a system, the larger the effective volume and therefore the harder it is to simulate classically.

A depiction of the effective volume V_eff of the gates contributing to the local observable B. A related quantity called the effective area A_eff is represented by the cross-section of the plane and the cone. The perimeter of the base corresponds to the front of information travel that moves with the butterfly velocity v_B.
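To make the cone-volume picture quantitative, here is a rough one-dimensional bookkeeping exercise (a back-of-the-envelope sketch of my own, not a formula from the paper): consider a brick-work circuit of depth T on an N-qubit chain, where a local observable’s backward butterfly cone widens by roughly 2·v_B sites per layer.

```latex
% Back-of-the-envelope sketch (illustrative assumption, not the paper's formula):
% counting the ~w/2 two-qubit gates per layer inside a cone of width
% w(t) = min(2 v_B t, N) gives
\[
  V_{\mathrm{eff}} \;\approx\; \sum_{t=1}^{T} \tfrac{1}{2}\,\min\!\big(2 v_B\, t,\; N\big)
  \;\approx\;
  \begin{cases}
    \tfrac{1}{2}\, v_B\, T^{2}, & 2 v_B T \ll N \ \text{(cone never saturates)},\\[4pt]
    \tfrac{1}{2}\, N\, T, & 2 v_B T \gg N \ \text{(full light cone)}.
  \end{cases}
\]
% A small butterfly velocity therefore keeps V_eff far below the circuit's
% total gate count, which is why the Floquet experiment discussed next is
% comparatively easy to simulate classically.
```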

We apply this framework to a recent experiment implementing a so-called Floquet Ising model, a physical model related to the time crystal and Majorana experiments. From the data of this experiment, one can directly estimate an effective fidelity of 0.37 for the largest circuits. With the measured gate error rate of ~1%, this gives an estimated effective volume of ~100. This is much smaller than the light cone, which included two thousand gates on 127 qubits. So, the butterfly velocity of this experiment is quite small. Indeed, we argue that the effective volume covers only ~28 qubits, not 127, using numerical simulations that obtain higher precision than the experiment. This small effective volume has also been corroborated with the OTOC technique. Although this was a deep circuit, the estimated computational cost is 5×10¹¹, almost one trillion times less than the recent RCS experiment. Correspondingly, this experiment can be simulated in less than a second per data point on a single A100 GPU. So, while this is certainly a useful application, it does not fulfill the second requirement of a computational application: substantially outperforming a classical simulation.
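As a sanity check on where the ~100 figure comes from, a common back-of-the-envelope model (an assumption on my part, not necessarily the paper's exact estimator) takes the effective fidelity to decay exponentially in the number of error-prone gates, F_eff ≈ (1 - ε)^V_eff ≈ e^(-ε·V_eff), and inverts it:

```python
import math

def effective_volume(effective_fidelity, gate_error):
    """Back-of-the-envelope estimate: assume F_eff ~ exp(-gate_error * V_eff),
    so V_eff ~ -ln(F_eff) / gate_error. Illustrative model only, not the
    paper's exact estimator."""
    return -math.log(effective_fidelity) / gate_error

# Floquet Ising numbers quoted above: F_eff ~ 0.37 with ~1% gate error.
print(round(effective_volume(0.37, 0.01)))  # ~99, i.e. the "~100" gates in the text
```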

Information scrambling experiments with OTOCs are a promising avenue for a computational application. OTOCs can tell us important physical information about a system, such as the butterfly velocity, which is critical for precisely measuring the effective quantum volume of a circuit. OTOC experiments with fast entangling gates offer a potential path for a first beyond-classical demonstration of a computational application with a quantum processor. Indeed, in our experiment from 2021 we achieved an effective fidelity of F_eff ~ 0.06 with an experimental signal-to-noise ratio of ~1, corresponding to an effective volume of ~250 gates and a computational cost of 2×10¹².

While these early OTOC experiments are not sufficiently complex to outperform classical simulations, there is a deep physical reason why OTOC experiments are good candidates for the first demonstration of a computational application. Most of the interesting quantum phenomena accessible to near-term quantum processors that are hard to simulate classically correspond to a quantum circuit exploring many, many quantum energy levels. Such evolutions are typically chaotic and standard time-order correlators (TOC) decay very quickly to a purely random average in this regime. There is no experimental signal left. This does not happen for OTOC measurements, which allows us to grow complexity at will, only limited by the error per gate. We anticipate that a reduction of the error rate by half would double the computational cost, pushing this experiment to the beyond-classical regime.

Conclusion

Using the effective quantum volume framework we have developed, we have determined the computational cost of our RCS and OTOC experiments, as well as a recent Floquet evolution experiment. While none of these meet the requirements yet for a computational application, we expect that with improved error rates, an OTOC experiment will be the first beyond-classical, useful application of a quantum processor.
