Hardware

Ask Slashdot: How Many (Electronics) Gates Is That Software Algorithm? 365

Posted by Soulskill
from the somewhere-between-zero-and-lots dept.
dryriver writes "We have developed a graphics algorithm that got an electronics manufacturer interested in turning it into hardware. Here comes the problematic bit... The electronics manufacturer asked us to describe how complex the algorithm is. More specifically, we were asked 'How many (logic) gates would be needed to turn your software algorithm into hardware?' This threw us a bit, since none of us have done electronics design before. So here is the question: Is there a piece of software or another tool that can analyze an algorithm written in C/C++ and estimate how many gates would be needed to turn it into hardware? Or, perhaps, there is a more manual method of converting code lines to gates? Maybe an operation like 'Add' would require 3 gates while an operation like 'Divide' would need 6 gates? Something along those lines, anyway. To state the question one more time: How do we get from a software algorithm that is N lines long and executes X number of total operations overall, to a rough estimate of how many gates this algorithm would use when translated into electronic hardware?"
This discussion has been archived. No new comments can be posted.


  • Holy crap (Score:5, Insightful)

    by CajunArson (465943) on Wednesday January 08, 2014 @04:35PM (#45901087) Journal

    Either implement it as shaders for a GPU (or a DSP) or hire somebody who actually knows about hardware design if you are hell-bent on implementing an ASIC.

    Slashdot: Where *not* to go to get specific advice about specific technical issues.

  • by Anonymous Coward on Wednesday January 08, 2014 @04:36PM (#45901101)

    You'd think the "electronics manufacturer" would have some idea how to estimate this.

  • by Anonymous Coward on Wednesday January 08, 2014 @04:40PM (#45901147)

    I think you may have a better chance of getting an answer if you ask this question on Stackoverflow (or one of its related sites).

    Unfortunately, I think asking on Slashdot is only likely to get you some tired and outdated memes / jokes...

  • by pavon (30274) on Wednesday January 08, 2014 @04:40PM (#45901151)

    If they plan on implementing this in hardware, then they should have people who are capable of answering that question. If instead, they are just a manufacturer and aren't capable of doing the actual hardware design, then you have bigger problems than answering this question. That is something you should find out about ASAP.

  • by dskoll (99328) on Wednesday January 08, 2014 @04:42PM (#45901167)

    The question "How many gates does it take to implement this algorithm?" is stupid. It's like asking "How long is a piece of string?"

    There will always be a time/space tradeoff, even with translating an algorithm to hardware. You can save time by throwing more gates at the problem to increase parallelism, or you can save space by reusing gates in sequential operations.
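The tradeoff dskoll describes can be sketched with a toy cost model. This is purely illustrative: it counts hypothetical "adder units" for summing n values, not real gates.

```python
import math

# Toy area/latency model for summing n values in hardware.
# "Area" counts adder instances; "latency" counts sequential
# adder delays. Both are illustrative, not real gate counts.

def parallel_tree_cost(n):
    """Fully parallel adder tree: n-1 adders, log2(n) sequential steps."""
    return n - 1, math.ceil(math.log2(n))

def sequential_cost(n):
    """One adder reused in a loop: minimal area, maximal latency."""
    return 1, n - 1

print(parallel_tree_cost(8))  # (7, 3): more area, lower latency
print(sequential_cost(8))     # (1, 7): less area, higher latency
```

Real synthesis tools explore this area/latency space automatically, which is why a single gate number is meaningless without a performance target.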

  • by janeuner (815461) on Wednesday January 08, 2014 @04:47PM (#45901231)

    They do have a way. They asked if it had already been determined.

    The correct response is, "We don't know."

  • by raymorris (2726007) on Wednesday January 08, 2014 @05:01PM (#45901359)

    Clearly it's not possible to render a software program as hardware. If everyone who explained the process (use Verilog) above is correct, that would mean that the exact same algorithm exists as both hardware and software.

    We can't have the same algorithm exist as both hardware and software, because that would mean algorithms are hardware just as much as they are software. That would mean all the people whining about "software patents" may as well be whining about unicorns. I hereby declare Verilog, ASICs, and FPGAs to be non-existent so we can continue to pretend that there is such a thing as a "software patent".

  • by swm (171547) * <swmcd@world.std.com> on Wednesday January 08, 2014 @05:03PM (#45901371) Homepage

    You already have your algorithm running in electronic hardware, right?
    Your current gate count is the sum of
      * the gate count of your CPU
      * the gate count of your RAM
      * the gate count of your program ROM

    So that's an upper bound on the gate count.
    If that number is too big for your manufacturing partner,
    then you have an optimization problem.

    Optimization is a hard problem...
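swm's upper bound is simple arithmetic. A minimal sketch, with made-up gate counts standing in for a small embedded CPU, 64 KiB of RAM, and 32 KiB of ROM (the real numbers depend entirely on the parts in question):

```python
# Illustrative figures only -- substitute the counts for your actual parts.
cpu_gates = 300_000               # a small embedded core (assumed)
ram_gates = 4 * 8 * 64 * 1024     # ~4 gates per bit of SRAM, 64 KiB (rule of thumb)
rom_gates = 1 * 8 * 32 * 1024     # ~1 gate per bit of ROM, 32 KiB (rule of thumb)

upper_bound = cpu_gates + ram_gates + rom_gates
print(upper_bound)  # 2659296
```

The point is only that the bound exists and is easy to compute; getting *below* it is the hard optimization problem swm mentions.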

  • by ttucker (2884057) on Wednesday January 08, 2014 @05:16PM (#45901475)
    Do not ask a computer scientist to be an electrical engineer.
  • Re:Holy crap (Score:5, Insightful)

    by Joce640k (829181) on Wednesday January 08, 2014 @05:48PM (#45901747) Homepage

    Just 'fess up and say "We don't know, we're software people, not hardware people".

    If it's really important they might offer some help.

  • by SplawnDarts (1405209) on Wednesday January 08, 2014 @06:03PM (#45901893)

    Knowing what algorithm you want to run in hardware is not even close to enough to estimate gates. You need to know the algorithm, and the required performance, and have a sketched-out HW design that meets those goals. THEN you can estimate gate count.

    For a simple example of why this is, consider processors. A 386 and a Sandy Bridge i7 implement very similar "algorithms" - it's just fetch->decode->execute->writeback all day long. If you implemented them in software emulation, it would be very similar software with some additional bits for the newer ISA features on the i7. But a 386 is about 280 THOUSAND gates, and the i7 is about 350 MILLION gates/core - three orders of magnitude different. Of course, there's at least a 2 order of magnitude performance difference too - it's not like those gates are going to waste.

    Point is, knowing the algorithm isn't enough to get even a finger in the wind guess at gate count. If you need an answer to this question, you need to get competent HW design people looking at it.
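A quick check of the ratio quoted above (figures as given in the comment, not independently verified):

```python
import math

i386_gates = 280_000             # "about 280 THOUSAND gates" (as quoted)
i7_gates_per_core = 350_000_000  # "about 350 MILLION gates/core" (as quoted)

ratio = i7_gates_per_core / i386_gates
print(round(math.log10(ratio), 1))  # 3.1 -- about three orders of magnitude
```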

  • Re:Holy crap (Score:5, Insightful)

    by Goaway (82658) on Wednesday January 08, 2014 @06:05PM (#45901917) Homepage

    This is the only sane answer. They probably only asked to find out if you happened to know.

    Say you don't know, and let them look at the code to figure it out.

  • Re: Verilog (Score:5, Insightful)

    by harrkev (623093) <kfmsd@@@harrelsonfamily...org> on Wednesday January 08, 2014 @06:48PM (#45902223) Homepage

    I still must disagree. Yes, the syntax is somewhat like C. However, WHAT you are coding is completely different. In particular, things that C can do with a simple "if" statement are not allowed at all in proper gate design. It is not hard to imagine a software guy coding latches all over the place, assigning the same signals from within different always blocks, etc. Even "always @(posedge clock)" may be a fundamental paradigm shift for a software guy. Not to mention the rather arbitrary way that Verilog treats wire vs. reg.

    wire a = b & c;            // implicit continuous assignment

    wire a;
    assign a = b & c;          // explicit continuous assignment

    reg a;
    always @(*) a = b & c;     // combinational always block

    These three constructs do the same thing. Why are two of them "wire" and one "reg"?

    What is the difference between the two blocks (they are NOT the same - blocking vs. non-blocking)?

    always @(posedge clk) begin
        a = b;           // blocking: the next line sees the NEW value of a
        c = a & b;
    end

    always @(posedge clk) begin
        a <= b;          // non-blocking: the next line sees the OLD value of a
        c <= a & b;
    end

    What about race conditions? Glitches on combinatorial logic? Proper coding of state machines? Need memory? How do you drop in an encrypted 3rd-party DDR controller and PHY? Interface with an AHB bus? In a given process, how many levels of logic are reasonable for a given clock speed? What exactly are hold violations?

    I am not saying that any of these are insurmountable. What I am saying is that a good digital designer is worth paying for, and a software guy may have a very steep learning curve indeed.

  • Re:Holy crap (Score:0, Insightful)

    by Anonymous Coward on Wednesday January 08, 2014 @06:50PM (#45902235)

    This is the only sane answer. They probably only asked to find out if you happened to know.

    Say you don't know, and let them look at the code to figure it out.

    Anybody who holds an actual 4-year CS degree should be able to do at least a rudimentary analysis of this type.
    Anybody who holds an actual 4-year "triple E" degree should be able to do a full blown analysis.

    Obviously neither company has any employees with either of those two degrees working for them. Or if they do, they really suck.
    Since submitter admits they are a software only house, my personal advice is to go find a different company to handle the hardware.

  • by geoskd (321194) on Wednesday January 08, 2014 @08:18PM (#45903029)

    Except ... Wow. An early course in my computer science curriculum was: 201. Computer Logic Design I (3) Prerequisite: MATH 113 or equivalent all with a grade of "C" or better. Basic topics in combinational and sequential switching circuits with applications to the design of digital devices. Introduction to Electronic Design Automation (EDA) tools. Laboratory projects with Field Programmable Gate Arrays (FPGA). (Lecture 2 hours, lab 3 hours) Letter grade only (A-F). (We used Verilog and a Xilinx FPGA board.) I'm surprised a reputable CS degree wouldn't require at least a basic course in digital logic; Cal State Long Beach is a great school, but it's certainly not a standards bearer...

    There is a world of difference between an entry-level college course on ASIC/FPGA design and actually being able to do the job. Just because you can design and synthesize a project with a few hundred gates in it does not mean you are even remotely prepared to know where to begin a project with 10^6+ gates in it. More importantly, high-level software languages allow for indiscriminate serial loops which are massively difficult to deal with in pure hardware. In short, the design methodology is completely different depending on whether you are building for a software path or a hardware path. You need someone with a hardware mindset to take your algorithm back to scratch and start over. Even knowing the HDLs is not good enough, as it is relatively trivial to write "valid" VHDL or Verilog code that can't be synthesized...
