
  • Samus - Monday, September 19, 2016 - link

    I don't understand why you would want an R-series chip over an A-series chip when the cost of licensing an A-series is already pennies on the dollar. This being targeted at mission-critical "safety" applications indicates the target market isn't really the penny-pinching crowd, and will likely prefer the more available, more standard, and more sophisticated Cortex-A53/A57, which is ridiculously cheap to license for single-core SoCs.
  • Ryan Smith - Monday, September 19, 2016 - link

    The short answer is that the A series CPUs are optimized for throughput and total performance. They lack the determinism and real-time guarantees that the R series provides; the time it takes for an instruction to complete is too variable. With the R5/52 series, you know exactly how long something is going to take, and the order it completes in. Which makes state validation a heck of a lot easier.
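
    A minimal sketch of what "you know exactly how long something is going to take" buys you (my illustration; the per-operation cycle costs and clock are invented, not ARM figures): on a deterministic, in-order core running from tightly coupled memory, the worst-case execution time of a straight-line code path is just a static sum, so it can be validated offline.

    /* Illustrative sketch: assumed fixed per-operation cycle costs on a
     * deterministic, in-order core. All numbers are invented. */
    #include <stdio.h>

    #define CYCLES_LOAD   2              /* assumed TCM load latency  */
    #define CYCLES_ALU    1              /* assumed ALU latency       */
    #define CYCLES_STORE  2              /* assumed TCM store latency */
    #define CPU_HZ        800000000ULL   /* example 800 MHz clock     */

    int main(void)
    {
        /* A toy task: 40 loads, 120 ALU ops, 40 stores. With no caches
         * or speculation in the path, this cycle count is an exact bound. */
        unsigned long long cycles =
            40ULL * CYCLES_LOAD + 120ULL * CYCLES_ALU + 40ULL * CYCLES_STORE;

        printf("worst-case execution time: %llu cycles (%.3f us)\n",
               cycles, 1e6 * (double)cycles / (double)CPU_HZ);
        return 0;
    }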
  • Samus - Tuesday, September 20, 2016 - link

    Interesting. I just figured that for the simple applications R is targeted at, a high-clocked A-series core would be more than up to the task of guaranteeing availability. Can't the pipeline be optimized for real-time availability with a linear kernel scheduler?
  • Qwertilot - Tuesday, September 20, 2016 - link

    There's a very big difference between something 'probably' being enough and being *sure* that it is :)
  • nightbringer57 - Tuesday, September 20, 2016 - link

    There are lots of things in a modern, fast processor that are good for performance but prevent you from guaranteeing, with 100% probability, that tasks will be dealt with on time.

    This is about the maximum latency for one run of the task.
    Say your application processor can run a task 1000 times/second. With caches, out-of-order execution, and all the modern, sophisticated hardware mechanisms, you can achieve this. The issue is that you're going to have wildly varying latencies between the start and the end of each run of the task. Even if it averages 1 ms, any individual run may very well have a latency anywhere between 0.1 ms and 10 ms.

    In a real-time processor, which is usually simpler and deterministic, you may not be able to run the same task more than 400 times a second. But you can actually make sure that a run will never take more than 3 ms.

    In such a system, you can mathematically prove that you will never get a task "overload": you know that the task will never take more than 3 ms, and by structuring your software rigorously, you can make sure that every task always completes within a known, and sufficient, timespan. Which is pretty useful for safety-related stuff. (See the sketch below.)
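
    To illustrate the "mathematically prove" part (a hedged sketch with an invented task set; the 3 ms figure echoes the example above), the classic Liu & Layland bound for rate-monotonic scheduling gives exactly this kind of offline guarantee once every task's worst-case execution time is a hard hardware bound:

    /* Offline "no overload" check using the Liu & Layland rate-monotonic
     * utilization bound. The task set is invented for illustration. */
    #include <math.h>
    #include <stdio.h>

    struct task { double wcet_ms; double period_ms; };

    int main(void)
    {
        struct task set[] = {
            { 3.0, 10.0 },   /* control task: at most 3 ms every 10 ms */
            { 1.0, 20.0 },   /* sensor poll                            */
            { 0.5, 50.0 },   /* telemetry                              */
        };
        int n = (int)(sizeof set / sizeof set[0]);

        double u = 0.0;
        for (int i = 0; i < n; i++)
            u += set[i].wcet_ms / set[i].period_ms;

        /* If U <= n(2^(1/n) - 1), rate-monotonic scheduling provably
         * meets every deadline for this task set. */
        double bound = n * (pow(2.0, 1.0 / n) - 1.0);
        printf("utilization %.3f vs bound %.3f: %s\n", u, bound,
               u <= bound ? "provably schedulable" : "needs exact analysis");
        return 0;
    }

    On an application core the hard part is the very first step: establishing a trustworthy worst-case execution time at all.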
  • ravyne - Tuesday, September 20, 2016 - link

    The target applications for these types of processors are mostly safety-critical (things crash or people die if it goes wrong) or timing-critical (an exact -- not just precise -- response time is guaranteed, or output is corrupted; often rapid, but not always). Controllers for industrial robotics, flight controls, and your car's airbags are good examples where even the smallest chance of non-determinism causing a system fault, or too slow a response, is too great a risk. On the timing-critical side, hard drive controllers are a good example of where the R architecture has a strong presence -- hard disk tracks are only getting tighter, and your data would be terribly corrupted if your drive's firmware ever made a timing mistake.
  • rgulde - Thursday, August 25, 2022 - link

    The question I have is related to the hypervisor and the CoreLink interconnect - I presume the SoC has a resource domain controller to favor R52 peripheral access.
    --Does this also mean, by configuration, that the CoreLink interconnect (AXI or AMBA bus, AHB) cannot bandwidth-limit the R52 core?
    --How about faults on the A side - is the reload of code limited to the A side? Do they load code independently?
    --Does the R52 have a lower-priority task to communicate data to be processed via some sort of shared memory? E.g. offloading AI neural nets to the A-side core for analysis - thinking autonomous vehicles, or control feedback from a main A-side integrator of many flight controls and inertial navigation (optical gyros) running at perhaps a slower rate.
  • michael2k - Monday, September 19, 2016 - link

    Redundancy is free with R.
  • ddriver - Monday, September 19, 2016 - link

    A is NOT real-time. Even with an RT kernel it doesn't come anywhere close to R for real-time work.
  • extide - Tuesday, September 20, 2016 - link

    Pretty much all hard drives and most SSD controllers are based on Cortex R series chips, BTW.
  • Raqia - Monday, September 19, 2016 - link

    Does this CPU even have caches? What's its closest A-X equivalent in terms of architecture and performance?
  • nightbringer57 - Tuesday, September 20, 2016 - link

    Caches typically are a PITA to handle in real-time applications.

    This can't really be compared to application processors; real-time processors typically sacrifice a lot of raw processing power in order to ensure deterministic behaviour.
  • Tom Womack - Tuesday, September 20, 2016 - link

    Yes, it has caches (see slide 17).
  • extide - Wednesday, September 21, 2016 - link

    It's right in the article: A7/A53.
  • lilmoe - Monday, September 19, 2016 - link

    A53 successor is way overdue, ARM...
  • extide - Tuesday, September 20, 2016 - link

    A35 ..?
  • Nenad - Tuesday, September 20, 2016 - link

    How much slower is R compared to A?

    I understand that determinism is an important benefit of R, but if A is orders of magnitude faster, is it possible that the *worst* case for A would still be guaranteed to be faster than R?

    And if that is the case, would A be as good as or better than R even for RT systems (provided you run an RTOS on top of A)?
  • allajunaki - Tuesday, September 20, 2016 - link

    Real-time, for example, is how your car's ABS / traction control operates. If they miss the processing window, they will skip the event and move on to the next one. Time-critical operations follow this method, and this is the sort of processing the R series enables. Real-time operation requires an RTOS and a processor that supports it. I would imagine these processors also prioritise low latency over raw throughput (so no out-of-order execution, caches, or anything that can potentially introduce unpredictable latency). See the sketch below.
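
    A hedged sketch of that "miss the window, skip the event" pattern (a POSIX simulation for illustration, not actual Cortex-R firmware; read_wheel_speed() and apply_brake_pressure() are hypothetical placeholders):

    /* Periodic hard-deadline loop: drop a late cycle rather than letting
     * lateness accumulate. POSIX simulation for illustration only. */
    #define _POSIX_C_SOURCE 200809L
    #include <stdio.h>
    #include <time.h>

    #define PERIOD_NS 5000000L   /* 5 ms control period (example) */

    static int  read_wheel_speed(void)      { return 42; }  /* placeholder */
    static void apply_brake_pressure(int v) { (void)v; }    /* placeholder */

    static void advance(struct timespec *t, long ns)
    {
        t->tv_nsec += ns;
        if (t->tv_nsec >= 1000000000L) { t->tv_nsec -= 1000000000L; t->tv_sec++; }
    }

    int main(void)
    {
        struct timespec next, now;
        clock_gettime(CLOCK_MONOTONIC, &next);

        for (int cycle = 0; cycle < 100; cycle++) {
            advance(&next, PERIOD_NS);
            clock_gettime(CLOCK_MONOTONIC, &now);

            /* Window already gone? Skip this event, move on to the next. */
            if (now.tv_sec > next.tv_sec ||
                (now.tv_sec == next.tv_sec && now.tv_nsec > next.tv_nsec)) {
                fprintf(stderr, "cycle %d: deadline missed, skipping\n", cycle);
                continue;
            }

            apply_brake_pressure(read_wheel_speed());

            /* Sleep until the absolute start of the next window. */
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
        return 0;
    }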
  • MrCommunistGen - Tuesday, September 20, 2016 - link

    Samsung has previously used ARM R4 cores in their SATA SSD controllers (MDX through MHX). I wonder if the R52 is slated to go into some future NVMe SSD controller!
  • Anato - Wednesday, September 21, 2016 - link

    I doubt safety-critical certifications will allow non-certified code to be run on the same processor at will. And I hope we don't need a new pile of corpses to prove this. The savings aren't so great that we should mix “random code” with safety-critical tasks and pray the silicon is implemented correctly and doesn't have bugs.
