Hardware

Ask Slashdot: How Many (Electronics) Gates Is That Software Algorithm? 365

dryriver writes "We have developed a graphics algorithm that got an electronics manufacturer interested in turning it into hardware. Here comes the problematic bit... The electronics manufacturer asked us to describe how complex the algorithm is. More specifically, we were asked 'How many (logic) gates would be needed to turn your software algorithm into hardware?' This threw us a bit, since none of us have done electronics design before. So here is the question: Is there a piece of software or another tool that can analyze an algorithm written in C/C++ and estimate how many gates would be needed to turn it into hardware? Or, perhaps, there is a more manual method of converting code lines to gates? Maybe an operation like 'Add' would require 3 gates while an operation like 'Divide' would need 6 gates? Something along those lines, anyway. To state the question one more time: How do we get from a software algorithm that is N lines long and executes X number of total operations overall, to a rough estimate of how many gates this algorithm would use when translated into electronic hardware?"
This discussion has been archived. No new comments can be posted.


  • Holy crap (Score:5, Insightful)

    by CajunArson ( 465943 ) on Wednesday January 08, 2014 @04:35PM (#45901087) Journal

    Either implement it as shaders for a GPU (or a DSP) or hire somebody who actually knows about hardware design if you are hell-bent on implementing an ASIC.

    Slashdot: Where *not* to go to get specific advice about specific technical issues.

    • Re:Holy crap (Score:5, Insightful)

      by Joce640k ( 829181 ) on Wednesday January 08, 2014 @05:48PM (#45901747) Homepage

      Just 'fess up and say "We don't know, we're software people, not hardware people".

      If it's really important they might offer some help.

    • by fisted ( 2295862 )

      Doesn't sound to me like they were going to implement it themselves. But then again, you are frist post so you presumably didn't read TFS properly, in order to get your awsm frist post.

    • Re:Holy crap (Score:5, Informative)

      by Austerity Empowers ( 669817 ) on Thursday January 09, 2014 @12:10AM (#45904257)

      To give a more helpful, unhelpful answer: it's an ill-formed question. "How many gates" depends on the target on which you synthesize the hardware: a PCB, an FPGA, actual silicon (which fab? which process? whose std cell library? what clock frequency?).

      If the above could somehow be narrowed down by asking the customer, then the next thing I'd advise is contracting someone who can write RTL using an HDL (Verilog is the most popular). The synthesizable subset of HDL is tricky to learn for non-HW people, so unless you understand digital logic well I'd suggest finding someone else to do it for you. They can then synthesize it to the targeted device/platform. If you can do this, you should charge quite a lot of money, since this form of IP is expensive, and they know it. If they're OK with that, you may also want to have the contractor write the design verification suite, since this company will certainly want that to integrate into their own testing. Lots of contractors are out there for this, given the cyclical nature of the work; make sure you also have some support arrangement in place in case you need them to fix or update the code later.

      Even simple software algorithms can be very big in HW, but some surprisingly complex SW algorithms are next to one-liners in HW (any form of bit masking or bit swizzling is free, for instance; see the sketch below). But generally, if there are a lot of sequential steps, and those steps are different... it gets big. Also assume that for every 1 SW guy who wrote the code, you will need 1 RTL designer. If you take the verification step, it may be 1-2 verification engineers per RTL designer, depending on your timeline.
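
      To make the "free in HW" point concrete, a minimal sketch (module and port names invented for illustration): a byte swap that costs shifts and masks in C is zero gates in Verilog, just renamed wires.

          // Byte-swapping a 16-bit bus: pure routing, no logic at all.
          module swizzle16 (
              input  wire [15:0] in,
              output wire [15:0] out
          );
              assign out = {in[7:0], in[15:8]};  // synthesis emits no gates
          endmodule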

      • Re:Holy crap (Score:5, Interesting)

        by fractoid ( 1076465 ) on Thursday January 09, 2014 @05:09AM (#45905041) Homepage
        It's also ill-formed (to the point of being almost meaningless) in the sense that the smallest number of gates for a given algorithm is probably achieved by implementing some kind of low-end processor which then runs the algorithm as code.

        What they really wanted to ask was "what's the best price/performance option for executing this algorithm, given the following expected parameters and an initial production run size of X".
  • How many (Score:3, Funny)

    by Aighearach ( 97333 ) on Wednesday January 08, 2014 @04:35PM (#45901097)

    beowulf clusters does your algorithm desire?

  • Verilog (Score:5, Informative)

    by tepples ( 727027 ) <tepplesNO@SPAMgmail.com> on Wednesday January 08, 2014 @04:35PM (#45901099) Homepage Journal
    If you learn to program in Verilog, you could try synthesizing for some FPGA and see how much space it takes up on the FPGA. But then programming for an FPGA differs from programming for a serial computer in that each line of code runs essentially as a separate thread, usually triggered on another signal (such as a clock) having a positive or negative edge.
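
    For illustration, a minimal sketch (module and signal names invented): the two always blocks below are evaluated concurrently on every rising clock edge, which is exactly the mental shift from serial C code.

        module tick (
            input  wire       clk,
            input  wire       rst,
            output reg  [7:0] count,
            output reg        seen_high
        );
            // A free-running counter...
            always @(posedge clk) begin
                if (rst) count <= 8'd0;
                else     count <= count + 8'd1;
            end

            // ...and a flag process that runs in the same cycle as the
            // counter, not "after" it as sequential C statements would.
            always @(posedge clk) begin
                if (rst)           seen_high <= 1'b0;
                else if (count[7]) seen_high <= 1'b1;
            end
        endmodule
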
    • Re:Verilog (Score:5, Interesting)

      by Anonymous Coward on Wednesday January 08, 2014 @04:46PM (#45901223)

      If you only need an estimate, use something like bamboo from PandA to convert your C code to Verilog, then synthesize that code for an FPGA. In the summary you should find how many logic cells would be used, as well as how many gates an ASIC would need. This figure is only an estimate, but for your question it should work.

      • Re:Verilog (Score:5, Informative)

        by ranulf ( 182665 ) on Wednesday January 08, 2014 @05:23PM (#45901541)

        The number of slices or logic cells or whatever else a particular synthesis program for a particular chip generates doesn't exactly correspond to a number of gates either. For instance, a single 4-in 1-out LUT on a Xilinx can be used for 1 gate or 6.

        I wouldn't have much confidence in automatic C-to-HDL conversion either. Good HDL design is about understanding the problem in terms of gates and parallelism. FPGAs and ASICs in general aren't particularly good at things that CPUs are good for, and conversely CPUs aren't especially good at things that FPGAs and ASICs do well.

        The OP shows such a lack of understanding of hardware design that it's not funny! "Add = 3 gates, Divide = 6 gates" is quite comical to anyone who actually knows these things. A more realistic ballpark is that an n-bit add can be done with 2n LUTs; in terms of gates it's about 5n, but that really depends on what gates you have available. A multiplier is massively more, and division is more complicated still. Fortunately, many FPGAs come with a few dedicated multipliers... Unless your algorithm requires no more multipliers than you have available, you're probably best off building a state machine and multiplexing a single multiplier unit (see the sketch after this comment), in much the same way as a CPU multiplexes the ALU at its core.

        The whole thing is massively dependent on the algorithm and on the experience of the person doing the porting. The best advice is to say "I don't know", or to hire someone who does, or to suggest running the algorithm on an embedded CPU.
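
        A hedged sketch of that state-machine idea (all names and widths invented; illustrative rather than production RTL). Because the two * operators sit in mutually exclusive states, synthesis tools will generally map them onto a single shared multiplier:

            // One 16x16 multiply reused across two operand pairs by a
            // small FSM: roughly half the multiplier area, twice the latency.
            module shared_mult (
                input  wire        clk, rst, start,
                input  wire [15:0] a0, b0, a1, b1,
                output reg  [31:0] p0, p1,
                output reg         done
            );
                localparam IDLE = 2'd0, M0 = 2'd1, M1 = 2'd2;
                reg [1:0] state;

                always @(posedge clk) begin
                    if (rst) begin
                        state <= IDLE;
                        done  <= 1'b0;
                    end else case (state)
                        IDLE: begin
                            done <= 1'b0;
                            if (start) state <= M0;
                        end
                        M0: begin
                            p0    <= a0 * b0;  // the one multiplier, cycle 1
                            state <= M1;
                        end
                        M1: begin
                            p1    <= a1 * b1;  // same multiplier, cycle 2
                            done  <= 1'b1;
                            state <= IDLE;
                        end
                        default: state <= IDLE;
                    endcase
                end
            endmodule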

        • Re:Verilog (Score:4, Informative)

          by SecurityTheatre ( 2427858 ) on Wednesday January 08, 2014 @06:01PM (#45901861)

          "Add = 3 gates, Divide = 6 gates" is quite comical to anyone who actually knows these things.

          Looking at an old reference I have, a 16-bit ripple-carry style adder requires 576 transistors, and a 16-bit carry-lookahead style adder (faster) requires 784 transistors.

          This is not including ANY control circuitry, nor a subtract feature.

          A pure-hardware 16-bit integer DIVIDE is between 15 and 30 times more complicated. To do it in pure hardware would require on the order of 23,000 transistors.

          Unless you need your division to happen wicked fast with low latency and you don't care about transistor count, it's better to build add/shift hardware and simply perform a division operation using those bits of hardware repeatedly.

          Also, we're only doing 16-bit. If you need 64-bit, multiply all of those numbers by about 50 (spitballing).

          And converting from C into VHDL is probably not going to be the best way to go about this. Hire a decent hardware engineer.
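
          Riffing on the add/shift suggestion above, here is roughly what that looks like (a textbook restoring divider, sketched with invented names; illustrative rather than production RTL): one subtractor and a few registers reused over 16 cycles, instead of a huge single-cycle divider array.

              // 16-bit restoring division, one quotient bit per clock.
              // Note: divisor == 0 is not handled in this sketch.
              module seq_div16 (
                  input  wire        clk,
                  input  wire        start,
                  input  wire [15:0] dividend,
                  input  wire [15:0] divisor,
                  output wire [15:0] quotient,
                  output wire [15:0] remainder,
                  output reg         done
              );
                  reg  [4:0]  count;
                  reg  [16:0] rem;   // one spare bit for the trial subtract
                  reg  [15:0] quo;
                  reg         busy;

                  // Shift rem:quo left one bit, then try subtracting.
                  wire [16:0] shifted = {rem[15:0], quo[15]};
                  wire [16:0] diff    = shifted - {1'b0, divisor};

                  always @(posedge clk) begin
                      done <= 1'b0;
                      if (start && !busy) begin
                          rem   <= 17'd0;
                          quo   <= dividend;
                          count <= 5'd16;
                          busy  <= 1'b1;
                      end else if (busy) begin
                          if (!diff[16]) begin           // subtract fit
                              rem <= diff;
                              quo <= {quo[14:0], 1'b1};
                          end else begin                 // restore
                              rem <= shifted;
                              quo <= {quo[14:0], 1'b0};
                          end
                          count <= count - 5'd1;
                          if (count == 5'd1) begin
                              busy <= 1'b0;
                              done <= 1'b1;
                          end
                      end
                  end

                  assign quotient  = quo;
                  assign remainder = rem[15:0];
              endmodule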

          • I am confounded by your claim that a 16-bit hardware divide would take 24000 transistors. If nothing else, you should be able to cascade it into 4 4-bit lookups, and that would handle the job. And that would probably be overkill.

            Using shift-and-add would almost certainly be better, especially since you could queue the operations. Although one 16-bit divide would then take about 120 clocks, 120 divides could take 240 clocks. (Look at me, I say clocks, I should say ops, and then let the clocks be w

          • I was misunderstanding my notes.

            You would need several thousand transistors for a standard DIV circuit, and then the CPU would need to iterate through the operation many times in order to perform a division.

            A single-cycle division circuit isn't practical, so it would involve building a state-machine and having the processor stall while doing the DIV calculation. The simple 1-bit circuit I was looking at would require a number of cycles equal to the number of bits input (16, 32, 64, etc), although they can

        • by kesuki ( 321456 )

          "A multiplier is massively more, dividing is even more complicated still."
          Which is why you multiply by 0.5 to get division by 2; for division by 3 you multiply by 0.333334 or so, depending on your precision. All possible divisions are a subset of multiplication, from 0.999 repeating down to nearly all zeros followed by a 1. Strange that something so 'easy' is harder than regular multiplication.
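
          For what it's worth, here is what that trick actually looks like for a constant divisor in fixed point (a sketch assuming 16-bit unsigned inputs): dividing by 3 becomes one multiply by the constant 0xAAAB = ceil(2^17/3), and the shift costs nothing.

              // Unsigned x/3 as a reciprocal multiply. Exact for all
              // 16-bit x because 3 * 0xAAAB = 2^17 + 1.
              module div3 (
                  input  wire [15:0] x,
                  output wire [15:0] q
              );
                  wire [31:0] prod = x * 16'hAAAB;  // full 32-bit product
                  assign q = prod[31:17];           // ">> 17" is just wiring
              endmodule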

      • Because if the hardware company is thinking "gates" instead of "cycles," they want to implement it in an FPGA. Hell, if they were going to put it on a dedicated microprocessor, they'd just recast it with libraries for that processor and recompile.

    • Re:Verilog (Score:4, Interesting)

      by Andy Dodd ( 701 ) <atd7NO@SPAMcornell.edu> on Wednesday January 08, 2014 @05:03PM (#45901369) Homepage

      There are some compilers that ATTEMPT to convert C/C++ into a hardware representation, but these will usually fail unless you understand the target hardware.

      http://www.drdobbs.com/embedded-systems/c-for-fpgas/230800194 [drdobbs.com]

      One thing is: Even if you can successfully compile from C to Verilog or VHDL, there is no guarantee that the Verilog or VHDL will successfully synthesize on your target hardware.

      Even if it successfully synthesizes, there is no guarantee that it will be in any way an optimal implementation.

      Some C algorithms may never transfer well into a hardware implementation.

      • This.

        But there has been recent progress, and Xilinx is pushing hard to get people to compile C to gates with their Vivado HLS (guess the targets?).
        Worth having a look at, since you usually can get a 30-day eval license for FPGA tools.

      • Re: (Score:3, Informative)

        "Some C algorithms may never transfer well into a hardware implementation."

        This is a fundamentally silly thing to say.

        Hardware can be made to implement ANY functioning software. It might not be easy, but it is pretty much by definition possible. It's already running on hardware... it would be very rare indeed for it to not be possible to translate it into even more-efficient hardware, since the hardware it's running on now is general-purpose.

        • He didn't say "may not transfer at all", he said "may not transfer well". Also remember that an algorithm isn't just running on any old bit of hardware; it's running on a modern CPU with lots of special instructions, a gigantic RAM attached to it, and potentially some other peripherals for special functions (hardware RNG, etc.). It might very well not be reasonable to try to convert all this to a custom FPGA/ASIC for the cost involved.
        • by fisted ( 2295862 )

          It really isn't feasible for even moderately complex systems.
          Or you seem to be ignoring that most 'hardware' does pretty much nothing without .... software (i.e. firmware).

      • Yep. Although sub-optimal is like the understatement of the year. I've seen not just inefficient but inefficient by an order of magnitude at times.
    • Re:Verilog (Score:5, Informative)

      by harrkev ( 623093 ) <kevin.harrelson@ ... om minus painter> on Wednesday January 08, 2014 @05:50PM (#45901761) Homepage

      Seriously???? Asking a C++ programmer to begin to use Verilog is simply not practical. There is a VERY STEEP learning curve in trying to target real hardware. There is even a very different frame of mind that has to be learned in order to target gates.

      I speak from experience. I program Verilog and SystemVerilog for a living doing ASIC design.

      Now, to answer the OP:

      The answer is very much "it depends." The most optimistic answer is a couple hundred thousand gates: implement an 8-bit CPU and write the thing in under 32K of code.

      On the other end of the spectrum is "many billions." Design your own x86 multi-core CPU, throw a couple of gigs of SRAM on the ASIC, tons of flash for a solid-state disc drive, and you will have a complete high-end PC on a chip. Then add your software.

      Of course, these are both ridiculous extremes. Everything depends on the TYPE of operations being done. In a CPU a simple 32-bit multiply can be done with one character ("*"). In gates, if you need the answer in a single clock cycle, it can take an EXTREME amount of logic. However, if you are willing to wait 32 clock cycles for the answer, the amount of logic is reduced to a very manageable level. This is why C++ is a bad choice of input: how time-sensitive is it? Hardware is also very parallel in nature. Different parts of the chip can indeed be working on different things at the same time. You can go for a strictly pipelined architecture where each block does one little bit of the job and passes it off to the next block: high throughput, but lots of gates. Or you could design a general-purpose block and have it do everything slowly (the most extreme example of this approach is a common CPU).

      While I have heard of magic "C to gates" compilers, after almost 15 years in the business, I have never actually seen one. The closest that I have seen are tools that can turn Matlab code into (messy-looking) gates. If your algorithm is DSP in nature, this is a very viable alternative. Otherwise, the only advice that I can give you is to consult somebody who does hardware design for a living (like me).

      Otherwise, you really need to look at where the input comes from, where the output goes, and how fast you need to do the work.
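
      As a sketch of that one-cycle-versus-32-cycles tradeoff (names invented, illustrative only): a 32-bit multiply done one bit per clock needs a single 32-bit adder and a shift register rather than a large single-cycle multiplier array.

          // Shift-and-add multiplier: p = a * b after 32 clocks.
          module mult32_seq (
              input  wire        clk, start,
              input  wire [31:0] a, b,
              output reg  [63:0] p,    // {accumulator, remaining bits of b}
              output reg         done
          );
              reg [31:0] mcand;
              reg [5:0]  n;
              reg        busy;

              // Add the multiplicand when the current LSB is set.
              wire [32:0] sum = p[63:32] + (p[0] ? mcand : 32'd0);

              always @(posedge clk) begin
                  done <= 1'b0;
                  if (start && !busy) begin
                      p     <= {32'd0, b};
                      mcand <= a;
                      n     <= 6'd32;
                      busy  <= 1'b1;
                  end else if (busy) begin
                      p <= {sum, p[31:1]};   // add-if-set, then shift right
                      n <= n - 6'd1;
                      if (n == 6'd1) begin
                          busy <= 1'b0;
                          done <= 1'b1;
                      end
                  end
              end
          endmodule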

      • Re:Verilog (Score:5, Interesting)

        by harrkev ( 623093 ) <kevin.harrelson@ ... om minus painter> on Wednesday January 08, 2014 @06:14PM (#45902003) Homepage

        Oh, one more thing about "C to Gates" compilers. In the industry I have not seen one in actual use, but they do supposedly exist. However, they would only work in a limited domain.

        For example, if you have C++ that does simple control or DSP-type stuff, then it might work (cannot vouch for the quality of the results). On the other hand, if you get one of these compilers and try feeding it the source code for the Apache web server or the Quake engine source code, you are completely screwed.

        If your application is, say, a novel type of network filter that inspects and does something to Ethernet packets, you have to figure out how to interface your design with a real Ethernet SerDes, which is a *LOT* different than opening up something in the "/dev/" directory. If your application is robotics, then you also need to get data into and out of the chip. How exactly is this done? How fast does the logic need to run? Is it speech processing? If so, then this will involve a lot of straightforward DSP. If you constrain the design to tell it how fast the data needs to flow through, you should be able to get a reasonable estimate. Does your application need a lot of memory? If so, you might need some type of RAM controller. DRAM controllers can be hairy to work with, and you also have to consider latency and throughput.

        In theory, C to gates can work quite well, ***for a limited subset of applications***.

        HOWEVER: as others have pointed out, anybody who needs to know the answer to this question should be qualified to answer it for themselves.

      • Verilog syntax was designed specifically to make it similar to C syntax, so I have to partially disagree with you on that note. A lot of software engineers do understand basics of system design, as well as some basics of parallel processing. There is indeed a learning curve on Verilog, but I'd say the vast majority of it is learning how to create effective test benches, not writing the system logic itself.
        • Re: Verilog (Score:5, Insightful)

          by harrkev ( 623093 ) <kevin.harrelson@ ... om minus painter> on Wednesday January 08, 2014 @06:48PM (#45902223) Homepage

          I still must disagree. Yes, the syntax is somewhat like C. However, WHAT you are coding is completely different. In particular, things that C can do with a simple "if" statement are not allowed at all in proper gate design. It is not hard to imagine a software guy coding latches all over the place, assigning the same signals from within different always blocks, etc. Even "always @(posedge clock)" may be a fundamental paradigm shift for a software guy. Not to mention the rather arbitrary way that Verilog treats wire vs. reg.

          wire a = b & c;

          wire a;
          assign a = b & c;

          reg a;
          always @(*) a = b & c;

          These three constructs do the same thing. Why are two of them "wire" and one "reg"?

          What is the difference between the two blocks (they are NOT the same - blocking vs. non-blocking)?

          always @(posedge clk) begin
              a = b;
              c = a & b;
          end

          always @(posedge clk) begin
              a <= b;
              c <= a & b;
          end

          What about race conditions? Glitches on combinatorial logic? Proper coding of state machines? Need memory? How do you drop in an encrypted 3rd-party DDR controller and PHY? Interface with an AHB bus? In a given process, how many levels of logic are reasonable for a given clock speed? What exactly are hold violations?

          I am not saying that any of these are insurmountable. What I am saying is that a good digital designer is worth paying for, and a software guy may have a very steep learning curve indeed.
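
          To make the latch trap concrete for software folks (a minimal sketch, names invented): an "if" without an "else" in combinational code silently asks for storage.

              module latch_trap (
                  input  wire       en,
                  input  wire [7:0] d,
                  output reg  [7:0] q_latch,
                  output reg  [7:0] q_comb
              );
                  // BAD: no else branch, so q_latch must hold its old
                  // value when en is low; synthesis infers a latch.
                  always @(*) begin
                      if (en) q_latch = d;
                  end

                  // GOOD: every path assigns q_comb, so this is pure
                  // combinational logic (a mux), no storage implied.
                  always @(*) begin
                      if (en) q_comb = d;
                      else    q_comb = 8'd0;
                  end
              endmodule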

    • Ya, FPGA is a good start, but you often need experts to redesign the algorithm for hardware. That is, you will be able to use much more parallelism than in software (fine- and coarse-grained, maybe pipelined dataflow, vector operations, etc.). Software as an algorithm usually has very little parallelism unless it's written in a language intended to express parallelism.

      Maybe consider whether part of the algorithm can be better done with a DSP chip as well.

      As for how many gates: well, as many gates as it takes to have an 8-bit CPU.

  • by Anonymous Coward on Wednesday January 08, 2014 @04:36PM (#45901101)

    You'd think the "electronics manufacturer" would have some idea how to estimate this.

    • by janeuner ( 815461 ) on Wednesday January 08, 2014 @04:47PM (#45901231)

      They do have a way. They asked if it had already been determined.

      The correct response is, "We don't know."

    • Because manufacture doesn't necessarily mean design expertise?

      Warning: Car analogy inbound.
      Why can't the workers on the assembly line of a GM plant design a car?

      • by Sarten-X ( 1102295 ) on Wednesday January 08, 2014 @05:27PM (#45901581) Homepage
        Because they're robots with no AI functionality?
      • Re: (Score:3, Informative)

        by Megane ( 129182 )
        A more accurate car analogy would be GM wanting to build a car using your technology and asking you how many assembly line workers it would take.
      • by nebular ( 76369 )

        Why are the workers on the assembly line speaking to anyone about the design of the car? The engineers who design or maintain the plant should be speaking to the artists and engineers who designed the car.

        So the engineers who know how to design chips should be speaking to the programmers who made the algorithm. If those engineers are unable to translate an algorithm into silicon, I'd be very worried about that company.

  • by mbadolato ( 105588 ) on Wednesday January 08, 2014 @04:36PM (#45901107)

    Make up a number, then when they complain that it was way off, blame it on their management changing scope a hundred times throughout the life of the project!

  • C to HDL to netlist (Score:2, Informative)

    by Anonymous Coward

    As a first-order approximation, you can translate your C/C++ code to a hardware description language (HDL) such as VHDL or Verilog. Tools exist for this process. The result won't be optimized for the peculiarities of HDL, but it will provide a good start. From there, you can port the HDL to a Xilinx or Altera FPGA netlist using vendor-specific tool chains. The porting effort will summarize the logic and memory resources of your implementation. Any digital hardware engineer worth their salt should be able to

  • by Anonymous Coward on Wednesday January 08, 2014 @04:40PM (#45901147)

    I think you may have a better chance of getting an answer if you ask this question on Stackoverflow (or one of its related sites).

    Unfortunately, I think asking on Slashdot is only likely to get you some tired and outdated memes / jokes...

  • by pavon ( 30274 ) on Wednesday January 08, 2014 @04:40PM (#45901151)

    If they plan on implementing this in hardware, then they should have people who are capable of answering that question. If instead, they are just a manufacturer and aren't capable of doing the actual hardware design, then you have bigger problems than answering this question. That is something you should find out about ASAP.

    • Maybe they can do the translation, but they need a number for how many gates so that they can give a number for how many dollars.
      • If they can design the hardware, they can ask for the source and supply the quote themselves.

        If they can't, then OP needs to understand they have no practical design capabilities and plan on paying someone else to design it---before paying these guys to manufacture it. Or he can search for a shop that can handle both the design and the manufacture.

  • Minecraft (Score:5, Funny)

    by nbetcher ( 973062 ) <nbetcher@nOSPAM.gmail.com> on Wednesday January 08, 2014 @04:41PM (#45901157)
    Develop out the algorithm in Minecraft using ProjectRed (Integration module, specifically) and then you can easily count the gates! :-)
  • by dskoll ( 99328 ) on Wednesday January 08, 2014 @04:42PM (#45901167) Homepage

    The question "How many gates does it take to implement this algorithm?" is stupid. It's like asking "How long is a piece of string?"

    There will always be a time/space tradeoff, even with translating an algorithm to hardware. You can save time by throwing more gates at the problem to increase parallelism, or you can save space by reusing gates in sequential operations.

    • Yes, theoretically, according to Turing, you could get by with enough gates to make a couple of registers, a goto/jump instruction, and a branch-if-zero test, as long as you have some read-write memory somewhere else.

  • by Animats ( 122034 ) on Wednesday January 08, 2014 @04:42PM (#45901177) Homepage

    You need a C to VHDL translator. Here's a tutorial for one. [xilinx.com]

    Only the parts of the algorithm that have to go really fast need to be fully translated into hardware. Control, startup, debugging, and rarely used functions can be done in some minimal CPU on or off the chip. So, for sizing purposes, extract the core part of the code that uses most of the time and work only on that.

    • by Trepidity ( 597 ) <delirium-slashdot@@@hackish...org> on Wednesday January 08, 2014 @05:08PM (#45901403)

      One caveat to going this route: if the algorithm contains well-known operations as building blocks, you probably don't want to synthesize your own VHDL versions of those standard operations, since they already have highly optimized hardware implementations. For example, if one step of the algorithm is "compute an FFT", you probably want to use an existing FFT IP core [ipcores.com] to implement it, rather than translating some FFT C code to new VHDL.

      At one extreme, where the algorithm is nothing but a chain of such cores (common in DSP applications), you could get a rough estimate just by looking up the gate counts for each operation and adding them up.

    • I'm worried about dryriver's "electronics manufacturer"; that kind of skill and knowledge should be a core competency of any business that makes custom app chipsets.

    • How did this get modded up to "Informative"? This is misinformation. If you believe what an FPGA vendor tells you about their tools then I have some land in Florida you might be interested in. There is NO push button path from C to hardware, unless you consider compiling the C into object code that is burned into ROM as a hardware solution. Yes, there are tools like Cynthesizer from Forte and the cited tool from Xilinx that use C as an input language, but it is gerrymandered C geared toward synthesis, not "
  • VHDL (Score:3, Informative)

    by Anonymous Coward on Wednesday January 08, 2014 @04:42PM (#45901181)

    Implement the algorithm in VHDL and test it on an FPGA. I would imagine you'll need to pay someone $$$$$ to do that for you...

  • by neutrino38 ( 1037806 ) on Wednesday January 08, 2014 @04:43PM (#45901197) Homepage Journal

    Hello,

    It is probable that you can break down your algorithm (I do not mean the code) into a pipeline of elementary processing steps and find implementations (IP) for each of them.

    To give out an estimate:
    - subdivide your algorithm into simpler pieces
    - find out, for each simple piece, how it can or could be implemented in hardware, and the complexity of each piece
    - do the sum.

    Indeed, a hardware designer or consultant would be of great help here.

  • HLS (Score:4, Informative)

    by orledrat ( 3490981 ) on Wednesday January 08, 2014 @04:45PM (#45901213)
    What you want to do is called high-level synthesis (going from C to hardware description language (HDL) to generating gate-lists from that HDL) and there's plenty of software to do that with. A neat open-source package for HLS is LegUp (http://legup.eecg.utoronto.ca/), check it out to get an idea of what the process consists of.
  • by cjonslashdot ( 904508 ) on Wednesday January 08, 2014 @04:46PM (#45901225)
    It's about more than gates. It is about registers, ALUs, gates, and how they are all connected. There are many different possible architectures, so it depends on the design: some designs are faster but take more real estate. There are algorithm-to-silicon compilers (I know: I wrote one for a product company during the '80s and it is apparently still in use today) but each compiler will assume a certain architecture. I would recommend one but I have been out of that field for decades.
  • by Anonymous Coward on Wednesday January 08, 2014 @04:48PM (#45901241)

    Here is a proven method for calculation.

    If your code is:
    a) C: divide the number of lines by 7
    b) C++: divide the number of lines by 5
    c) Ruby/Python/Java: divide the number of lines by 3
    d) Perl: multiply the number of lines by 42
    e) C#: resign.

  • The question seems so ill-posed that one has to wonder if there's a product or service advert lurking... but assuming this is real.

    Software doesn't automatically translate directly to hardware. As others have noted, break out the algorithmic core from the setup and finish. Presumably there is some part of the code which is the most critical in steady state. Describe that to their hardware engineers in whatever depth is required. Depending on the algorithm, the ASIC library elements available (or FPGA units,

  • Mod this offtopic if you want but now I can't see my comments, I can't see if anyone has responded to them, and it has become almost impossible to participate in discussions as a result. WTF, /.?

  • Haven't tried it, but Cadence's C-to-Silicon might be up to the job. Also keep in mind that in hardware you have very different requirements than in software, and parallelisation has interesting effects on the number of gates. The best option is to get an EE, preferably with experience in digital design, to take a look at it. Other options are SystemC compilers, but they're not really up to production use yet as far as I know. And it is also very technology dependent; sometimes complicated logical functio
  • Clearly it's not possible to render a software program as hardware. If everyone who explained the process (use Verilog) above is correct, that would mean that the exact same algorithm exists as both hardware and software.

    We can't have the same algorithm exist as both hardware and software, because that would mean algorithms are hardware just as much as they are software.
    That would mean all the people whining about "software patents" may as well be whining about unicorns. I hereby declare Verilog,

  • by swm ( 171547 ) * <swmcd@world.std.com> on Wednesday January 08, 2014 @05:03PM (#45901371) Homepage

    You already have your algorithm running in electronic hardware, right?
    Your current gate count is the sum of
      * the gate count of your CPU
      * the gate count of your RAM
      * the gate count of your program ROM

    So that's an upper bound on the gate count.
    If that number is too big for your manufacturing partner,
    then you have an optimization problem.

    Optimization is a hard problem...

  • Accurate answer (Score:4, Informative)

    by Sarten-X ( 1102295 ) on Wednesday January 08, 2014 @05:04PM (#45901383) Homepage

    Write out the truth table for each output as a Karnaugh map [wikipedia.org] incorporating every input. Count the number of gates needed to solve the map, and that's your answer for that output bit. Repeat for every other output bit. Add all those numbers together, and that's a fair estimate of how many gates you'll need.

    Of course, this method requires that your number of input bits must be fairly small. Don't forget that memory counts as both input (when read) and output (when written). For nontrivial applications, you'll find that the number of gates quickly approaches "a lot".
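
    As a toy instance of the method (counting 2-input gates and calling an XOR one gate): the Karnaugh maps for a 1-bit full adder solve to Sum = a XOR b XOR cin and Cout = ab + cin(a XOR b), about five gates for that pair of output bits.

        // Full adder straight from its solved K-maps: 2 XOR, 2 AND, 1 OR.
        module full_adder (
            input  wire a, b, cin,
            output wire sum, cout
        );
            wire axb = a ^ b;                    // term shared by both maps
            assign sum  = axb ^ cin;
            assign cout = (a & b) | (axb & cin);
        endmodule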

    • by mrego ( 912393 )
      Since they are translating a program/algorithm into circuitry, they only need to know the maximum number of gates in use at any one cycle (taking necessary time delays into account), so just adding up all the gates per operation badly overstates the answer, since AND, NOT, OR, etc. gate circuitry can be reused for different operations at another cycle. Also, as for the logic operations, run them through a Quine-McCluskey optimization as well to minimize them.
  • by Yoik ( 955095 ) on Wednesday January 08, 2014 @05:09PM (#45901409) Journal

    It doesn't take many gates to build a Turing machine that will run your algorithm, but it's likely to be slow. A proper hardware implementation will optimize everything and be as parallel as possible.

    The problem as stated isn't adequately constrained.

  • You guys are probably trying to get a manufacturer to make Scrypt-mining ASICs.

    • by neiras ( 723124 )

      Yep, first thing I thought too.

    • Bingo. And the hardware guys recognized it immediately too. Mainly because they're probably getting emailed the same question 10 times a day.

      There's a reason a Scrypt ASIC has been a long time in the making: it's memory-intensive. Only alpha-t.net seems to be making headway toward a viable product. But even with them taking preorders now, nothing is written in stone. Commercially and technically, a SHA ASIC was an easier cat to skin.

  • by ttucker ( 2884057 ) on Wednesday January 08, 2014 @05:16PM (#45901475)
    Do not ask a computer scientist to be an electrical engineer.
    • Do not ask a computer scientist to be an electrical engineer.

      And for the sake of Pete's dragon don't hand him/her an electric screwdriver! Chaos will ensue.

    • Do not ask a computer scientist to be an electrical engineer.

      Except ... Wow. An early course in my computer science curriculum [csulb.edu] was:

      201. Computer Logic Design I (3)
      Prerequisite: MATH 113 or equivalent all with a grade of "C" or better.
      Basic topics in combinational and sequential switching circuits with applications to the design of digital devices. Introduction to Electronic Design Automation (EDA) tools. Laboratory projects with Field Programmable Gate Arrays (FPGA).
      (Lecture 2 hours, lab 3 hours) Letter grade only (A-F).

      (We used Verilog and a Xilinx FPGA board.) I'm

  • Theoretical answer:
    Recode your algorithm in SystemC (a C++ library that can be used to implement a register transfer language representation of your algorithm) and synthesize it with one of the available tools (e.g., Accelera, Synopsys, Calypto), targeting a typical library (e.g., 28nm TSMC) at a particular clock frequency.

    Practical answer:
    Ask someone with hw design experience to estimate it for you...

    FWIW, nobody wants an "exact" size in logic gates; all they want is an idea of the complexity. The big ticket


  • For what it is worth, here is a circuit I developed to see what the gate configuration (NOR only) would look like for the implementation of a condition that the input switches be:

    0
    1
    10
    11
    http://www.neuroproductions.be/logic-lab/index.php?id=3699 [neuroproductions.be]

    you know, the counting integers 0,1,2,3 - the same code that I have on my luggage. I thought there might be a fun implementation related to security or something - a hybrid mechanical/electronic locking system.

    It turned out to be super hard for me to
  • "It takes one gate that accepts our input and outputs a desirable answer. We would like you to design that gate."

  • Then, stick your pinky into the corner of your mouth and do your best evil laugh!

  • by Asmodae ( 1155077 ) on Wednesday January 08, 2014 @05:24PM (#45901555)

    Several people here have suggested using a high-level synthesis tool to convert your software (C/C++) directly to an HDL (Verilog/VHDL) of some kind. This can work; I've been on this task and seen its output before. The catch is: unless that software was expressly and purposely written to describe hardware (by someone who understands that hardware, its limitations, and how that particular converter works), it almost always makes awful, extraordinarily inefficient hardware.

    Case in point: we had one algorithm developed in Simulink/Matlab that needed to end up in an FPGA. After 'pushing the button' and letting the tool generate the HDL, it consumed not just one but about four FPGAs' worth of logic gates, RAMs, and registers. Needless to say, the hardware platform had only one FPGA, and a good portion of it was already dedicated to platform tasks, so only about 20% was available for the algorithm. We got it working after basically re-implementing the algorithm with hardware in mind. The generation tool's output was 20 times worse than what was even feasible. If you're doing an ASIC you can just throw a crap-load of extra silicon at it, but that gets expensive very quickly. Plus, closing timing on that will be a nightmare.

    My job recently has been to go through and take algorithms written by very smart people (but oriented to software) and re-implement them so they can fit on reasonably sized FPGAs. It can be a long task sometimes and there's no push-button solution for getting something good, fast, cheap. Techies usually say you can pick two during the design process, but when converting from software to hardware you usually only get one.

    Granted this all varies a lot and depends heavily on the specifics of the algorithm in question. But the most likely way to get a reasonable estimate is going to be to explain the algorithm in detail to an ASIC/FPGA engineer and let them work up a prelim architecture and estimate. The high-level synthesis push-button tools will give you a number but it probably won't be something people actually want to build/sell or buy.

  • While an interesting question (I didn't even know hardware manufacturers were in the habit of converting software into hardware), why don't they figure it out themselves? They must have the tools/people to do it. Are you afraid they'll "steal your algorithm" if you give them the source? (that's much less interesting)

  • Look at Mathworks. They have a solution for this.
    • They have a tool that can do this; I don't know if I'd call it a 'solution' just yet, though. We've just finished ripping that 'solution' out of our project because we wanted a device that was actually small enough (and thus cheap enough) to be able to sell.

      It takes input designed to be hardware and makes good hardware. It takes input designed to be software and makes shit hardware. It also doesn't handle version control very well, you need proprietary tools to even VIEW the design files... and th
  • Computers are so cheap and low power today that turning an algorithm into gates would be a silly way to proceed. So the question is not really relevant except academically.

  • We (ConcurrentEDA.com) have developed a tool called Concurrent Analytics that analyzes a program's x86 code and estimates the gate count. This tool works for Xilinx and Altera FPGA chips and provides an upper bound, since logic optimization reduces the gate count. Essentially, we have an extensive library of all software assembly instructions and their gate count in an FPGA. Synthesizing software into a chip requires more work, but we have an internal tool for that as well. We translate x86 into a hardware de
  • by SplawnDarts ( 1405209 ) on Wednesday January 08, 2014 @06:03PM (#45901893)

    Knowing what algorithm you want to run in hardware is not even close to enough to estimate gates. You need to know the algorithm and the required performance, and to have a sketched-out HW design that meets those goals. THEN you can estimate gate count.

    For a simple example of why this is, consider processors. A 386 and a Sandy Bridge i7 implement very similar "algorithms" - it's just fetch->decode->execute->writeback all day long. If you implemented them in software emulation, it would be very similar software with some additional bits for the newer ISA features on the i7. But a 386 is about 280 THOUSAND gates, and the i7 is about 350 MILLION gates/core - three orders of magnitude different. Of course, there's at least a 2 order of magnitude performance difference too - it's not like those gates are going to waste.

    Point is, knowing the algorithm isn't enough to get even a finger in the wind guess at gate count. If you need an answer to this question, you need to get competent HW design people looking at it.

  • The answer is "too many"

  • "You cannot directly interpret a software algorithm to hardware." Why? Here are the follow ups: What type of hardware, FPGA, GPU, custom ASIC? What part of the algorithm NEEDS to be in hardware to gain performance over basic system resources (CPU, GPU)? Who is going to pay for this little experiment?

    As others more qualified have already stated, you rarely if ever get a direct translation nor do you always need to interpret the entire algorithm to hardware. For a hardware manufacturer to even ask the questio

  • I don't know how easy it would be to port your specific algorithm, but I did my masters thesis around a language called Handel-C. It's a superset of C that provides a high-level FPGA programming interface. That might get you some distance in determining the number of gates. Disclaimer: I was working with it a few years back and the documentation/support was appalling; I don't know if it's become any better.
  • by Wierdy1024 ( 902573 ) on Thursday January 09, 2014 @06:21AM (#45905221)

    I shall give it a go.

    First up, most algorithms can't be directly translated to hardware without either changing them or taking a serious performance hit.

    Nearly all widespread algorithms (e.g. H.264 video) are designed specifically with a hardware implementation in mind, and in fact usually must have elements removed that would produce good results, simply because they wouldn't be sensible to implement in hardware.

    In particular, in hardware, loops that iterate an unknown number of times are generally not allowed.

    Steps to make this estimate: first, take your code and 'flatten' it (i.e. rewrite it to avoid all use of pointers, except arrays).

    For every variable, figure out how many bits wide it needs to be (i.e. what are the smallest and largest possible values?). You probably want to convert floating point to fixed point.

    Next, to make a lower bound of how many gates would be used if you were to design for minimal gate use, take every add and subtract operation and call them 15 gates per bit. For every multiply call it 5 gates per input bit squared. Don't do division (division can be done as a multiplication by the inverse of a number).

    For the upper bound, do the same, but multiply by the number of times each loop goes round. That gives you a design with lots more gates but much higher performance.

    For the upper bound, finally add on 5 gates for every bit of every variable, times the number of lines of your input code. This approximates the D-type flip-flops for storage in a pipeline. Note that if two lines of code operate on entirely different variables, you can count them as the same line as far as this metric goes.

    For the lower bound, if you got a value greater than 10,000 plus 16 times the number of bytes of your compiled program plus the RAM it allocates to run, it would be more gate-efficient to put in a tiny processor and keep your algorithm in a ROM. (Lots of complex algorithms are implemented this way when space is at a premium.)
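
    To make that recipe concrete, a worked example with entirely made-up numbers: say the flattened code has 8 adds and 2 multiplies on 16-bit fixed-point values, 20 variables, and 30 lines. Lower bound: 8 x 16 x 15 + 2 x 5 x 16^2 = 1,920 + 2,560, call it 4,500 gates. If the main loop runs 64 times and you unroll it fully, the upper bound is 64 times that (about 287,000 gates) plus the pipeline flops, 5 x (20 x 16) x 30 = 48,000, for roughly 335,000 gates in total. The 4,500-gate lower bound sits well under the 10,000-plus-ROM threshold in the last paragraph, so a dedicated datapath is at least plausible; the fully unrolled version is what maximum throughput would cost.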
