
Can .NET Really Scale?

swordfish asks: "Does anyone have first-hand experience with scaling .NET to support 100+ concurrent requests on a decent 2-4 CPU box with web services? I'm not talking about a cluster of 10 dual-CPU systems, but a single system. The obvious answer is 'buy more systems', but what if your customer says 'I only have 20K budgeted for the year'? No matter what Slashdot readers say about buying more boxes, try telling that to your client, who can't afford anything more. I'm sure some of you will think, 'What are you smoking?' But the reality of current economics means that 50K on a server is a huge investment for a small company. One could argue that five cheap systems at 3K each could support that kind of load, but I haven't seen it, so inquiring minds want to know!"

"Ok, I've heard from different people as to whether or not .NET scales well and I've been working with it for the last 7 months. So far from what I can tell it's very tough to scale for a couple of different reasons.

  1. Currently there isn't a mature messaging server, and MSMQ is not appropriate as a high-load messaging platform.
  2. SOAP is too damn heavyweight to scale well beyond 60 concurrent requests on a single-CPU 3 GHz system.
  3. SQL Server doesn't support C# triggers or any way to embed C# applications within the database.
  4. The throughput of SQL Server is still around 200 concurrent requests for a single- or dual-CPU box. I've read the posts about the Transaction Processing Performance Council, but get real: who can afford to spend 6 million on a 64-CPU box?
  5. The clients we target are small-ish, so they can't spend more than 30-50K on a server. So where does that leave you in terms of scalability?
  6. I've been running benchmarks with dynamic code that does quite a bit of reflection, and the performance doesn't impress me.
  7. I've also compared the performance of a static ASP/HTML page to a web service page, and the throughput drops from 150-200 to about 10-20 on a 2.4-2.6 GHz system.
  8. To get good throughput with SQL Server you have to use async calls, but what if you have to do sync calls? From what I've seen the performance isn't great (it's OK), and I don't like the idea of setting up partitions. Sure, you can put mirrored RAID on all the DB servers, but that doesn't help me if a partition goes down and the data is no longer available.
  9. I asked an MS SQL Server DBA about real-time replication across multiple servers and his remark was "it doesn't work, don't use it."
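
On point 8, it's worth noting that async database calls in .NET 1.x don't require anything special from SQL Server: any synchronous ADO.NET call can be pushed onto a thread-pool thread with an asynchronous delegate invocation. A minimal sketch, where the Orders table, connection string, and server name are all hypothetical:

```csharp
using System;
using System.Data.SqlClient;

class AsyncQueryDemo
{
    // Delegate matching the synchronous worker we want to run off-thread.
    delegate int QueryWorker(string connString);

    static int CountOrders(string connString)
    {
        using (SqlConnection conn = new SqlConnection(connString))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn);
            return (int)cmd.ExecuteScalar();
        }
    }

    static void Main()
    {
        string connString = "Server=dbhost;Database=Shop;Integrated Security=SSPI";
        QueryWorker worker = new QueryWorker(CountOrders);
        // Kick off the query on a thread-pool thread...
        IAsyncResult ar = worker.BeginInvoke(connString, null, null);
        // ...overlap other per-request work here...
        int orders = worker.EndInvoke(ar);   // blocks until the query finishes
        Console.WriteLine(orders);
    }
}
```

BeginInvoke queues the query on the thread pool, so the caller can overlap other work before blocking in EndInvoke; whether that actually improves throughput depends on what the request is otherwise waiting on.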
  • by christoofar ( 451967 ) on Friday July 18, 2003 @06:49PM (#6474977)
    ... but Unix/Java programmers aren't. Wanting to write the code for free, too?
    • by wfberg ( 24378 ) on Friday July 18, 2003 @06:59PM (#6475055)
      ... but Unix/Java programmers aren't. Wanting to write the code for free, too?

      Seems to me swordfish is going to be coding it anyway. I'm sure he can figure out how costly it is to retrain him to program Unix/Java.

      Having said that, colleges/universities are churning out Java programmers at an alarming rate. And seeing how unemployment is only rising (lots of experienced people on the market) newcomers are really, really cheap! (They're used to living like.. well.. like students!)

      Also, is programming for this new-fangled .Net thingy really that much easier than programming for "Unix/Java"? Or is that a delusion caused by smoking crack? Surely coding for platforms that have been around for years upon years is more of a no-brainer than programming for what swordfish describes as a hideously unstable, inefficient platform?

      Now, I agree that finding reasonably adept administrators for windows is much easier, and cheaper, than finding ace Unix admins. But that doesn't say anything about coders.

      If swordfish is doing a feasibility study on this, for Pete's sake, suggest an alternative with less Microsoft in it! Any reason why that server should be .Net if all it spews out are more or less standard webservices messages?
  • Solution (Score:3, Insightful)

    by Synithium ( 515777 ) on Friday July 18, 2003 @06:51PM (#6474988)
    Apache, FreeBSD and a cluster of 10 or so $1k servers and a nice DB server running PostgreSQL.

    Works for me.
    • What language/libraries do you use to develop applications? What kind of performance is typical for you?

      • We have switched from Windows Svr 2k and ASP to Apache 2 and PHP 4 on the front end. On the back end we use Java 1.4 and broke our application apart to run multiple master/slave processes in a tree system (Process A, Master I, Machines a-d; Process B, Master II, Machines e-h...) to do data analysis for the requests. (This is a data-mining sort of thing with analysis and a search.) The DB started becoming a bottleneck after we got up to 200 concurrent processes, which we fixed by breaking apart the DB
      • FreeBSD + Apache 1.3.x can easily do 500+ small requests per second on a pII-400 w/ 512MB of RAM. Add MySQL in to the mix, and with proper code (read: caching, so you don't hit MySQL every hit) it drops considerably, but it's still above the 50-100 hit/s range (if you do it well) :)
  • by abulafia ( 7826 ) on Friday July 18, 2003 @06:51PM (#6474993)
    I hate to say it, but I've been too long out of the MS development world. That kind of overhead still manages to amaze me.

    I'm deploying systems right now (some buzzword compliant, some more efficient ones on lowly little open source) that scale to an order of magnitude higher transaction volume at a fraction of the cost. No, none of them are Windows.

    No wonder my company has been doing well in a downturn. (Oh, sorry, we're "recovering" now.)
    • by gmack ( 197796 ) <.ten.erifrenni. .ta. .kcamg.> on Friday July 18, 2003 @07:00PM (#6475067) Homepage Journal
      I hear you.. one of my previous employers had a php/apache system on a dual-CPU PIII 500 with 256 MB RAM that easily handled 500 customers at any given moment.

      Not even a hiccup. Then the bright guy tried to do that on Windows 2000 for another customer.. it choked at about 100.

      I'll take good performance with low spec hardware over the ability to scale on 10+ CPU systems anyday.
    • I think this article was asking for numbers and setup information, and probably a lot of other people would be interested in yours if your claim is true. Please elaborate.

      I'm not trolling; I'm curious.
  • 100 concurrent users isn't a lot.

    What is the web app going to do? All the hardware in the world, and even open source, won't help you much if you're trying to do the wrong things on a single machine. Database-driven site? Commerce? Heavy read, heavy write, or both?
  • well... (Score:4, Insightful)

    by confusion ( 14388 ) on Friday July 18, 2003 @06:56PM (#6475025) Homepage
    My first inclination is to recommend throwing that $20k at an ASP (application service provider) that can provide the server infrastructure to give you support for 100 concurrent connections.

    Barring that, my recommendation would be to split the web front end and database, spending about $10k on each (using Dell or HPQ). I can almost guarantee that you aren't going to get 100 concurrent connections for less than $80k to $100k without doing some sort of load distribution. If you strip down the amount of dynamic content and, say, script a refresh of a static page, you might be able to do it, but we don't really know what the app is going to be doing.

  • I don't really know an answer but I will throw in my tidbit.

    But first let me apologize for all the nutheads who say "drop MS - use Linux" and all the derivatives thereof. That doesn't help anyone, and doesn't answer the question. Might as well say "use a dustmop, works great on my floors!".

    My advice would be to *try* and use a cluster of some sort instead of the one server approach. Sure, you can get some great big reliable iron - that is wicked fast... But what I have found is that scaling really nee
    • You say:

      My advice would be to *try* and use a cluster of some sort instead of the one server approach. .... Of course, the more machines - the more licenses... Good luck!

      What part of this did you not understand?

      the obvious answer is 'buy more systems', but what if your customer says I only have 20K budgeted for the year. No matter what Slashdot readers say about buying more boxes, try telling that to your client, who can't afford anything more.

      Go away.

    • let me apologize for all the nutheads who say "drop MS - use Linux"

      And why, exactly, is this a nuthead reaction? Our original poster has hit some major problems with a lump of technology that are, essentially, entirely financial in nature. Basically he's saying "we've developed in .net but have discovered that we're going to have to spend $big PER CLIENT to roll the bloody thing out".

      Yes, they've been bait'n'switched - and he'll probably do better technology assessments next time, and as you point out th
  • I can see it now- after commanding the drones to switch to Windows 2.003k, they look at the price tag- the jump in overtime, the additional hardware for that "faster" version, the new software licenses...

    President:"But...but...that commercial said it would be cheaper, and it had lots of pretty people doing neat things, with nice music in the background! And the nice representative at the golf tournament said I'd get to have employees walking around with little handheld things that showed our inventory!

  • by Anonymous Coward on Friday July 18, 2003 @06:58PM (#6475050)
    First, you didn't really specify anything except in generalities, but there's a few things that pop out from my experiences:

    1. Why are you wed to C#, especially in regards to triggers? How many tiers exist, and are you pumping a lot of data back and forth?

    2. Your scaling numbers are low already, especially under ASP and static HTML.

    3. You never really define concurrent requests. For some people, it means simultaneous requests, and for others, it means simultaneous transactions. But you really are looking at fairly low numbers there, in either case.

    4. Scaling this should involve looking at where you choke. One common choke point that keeps killing people is in open database connections. Are you running a pool? How large? How many connections does a page take? The single most common problem I've seen in scaling is poorly implemented connection pooling, thereby causing a ton of stuff to wait. Check this, check, then check again.

    5. Sync versus Async shouldn't really be coming into play yet on the db.

    6. When designing for light-weight systems, you want to minimize the tiers, and minimize the data passed back and forth. Just by reading this, I'm worried that you created a very elegant, but impractical, system that isn't suited to the hardware limitations.
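
    The connection-pool point in #4 above deserves emphasis: ADO.NET pools connections automatically per identical connection string, so the pattern that scales is to open late and close early and let the pool do the reuse. A rough sketch, where the server name, pool sizes, and query are all illustrative:

```csharp
using System.Data.SqlClient;

class PooledAccess
{
    // Identical connection strings share one pool; Max Pool Size caps the
    // number of physical connections (the default is 100).
    const string ConnString =
        "Server=dbhost;Database=App;Integrated Security=SSPI;" +
        "Min Pool Size=5;Max Pool Size=50";

    public static object LookupPrice(int productId)
    {
        using (SqlConnection conn = new SqlConnection(ConnString))
        {
            conn.Open();   // cheap: usually just borrows a pooled connection
            SqlCommand cmd = new SqlCommand(
                "SELECT Price FROM Products WHERE Id = @id", conn);
            cmd.Parameters.Add("@id", productId);
            return cmd.ExecuteScalar();
        }   // Dispose returns the connection to the pool immediately
    }
}
```

    Holding a connection open across a whole page, or sizing the pool smaller than the number of concurrent pages, is exactly the "ton of stuff waiting" failure mode described above.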
  • by SamBeckett ( 96685 ) on Friday July 18, 2003 @07:00PM (#6475065)
    This entire story is lacking units.. I am so confused, it is like this...

    "I bought a 400 car from my dealer, who said it could go 0-1200 in 57, but I talked to an auto mechanic and he said that the rpm throttled at 4.5 billion, so I don't know if I should get a turbo charger which would at least boost the speed to 1295!!"

    If you are talking about 100 concurrent request per second: Any DB worth its salt should handle that IFF the database queries aren't too complex. If they are, your schemas suck. This is doubly true on a 3 GHz machine.
    • "I bought a 400 car from my dealer, who said it could go 0-1200 in 57, but I talked to an auto mechanic and he said that the rpm throttled at 4.5 billion, so I don't know if I should get a turbo charger which would at least boost the speed to 1295!!"

      theres no way a 400 can do that in 57, i slapped a new module in my 400 and i could barely do it in 35. you may need to replace your module, just grab it by the flat side and push it your right, your right, not mine.
  • Why don't you just ask MS this question... what? huh? You can't? It's too expensive? They lie? They don't know?

    Then why are you using .NET?
  • 2. SOAP is too damn heavy weight to scale well beyond 60 concurrent requests for a single CPU 3ghz system.

    It doesn't sound like you're talking about .NET specifically, but just SOAP in general. Make sure you separate out the platform from the product. Saying web services with SOAP won't work is a long way away from saying .NET doesn't scale.

    3. SQL Server doesn't support C# triggers or a way to embed C# applications within the database

    Embedding applications in the database violates basic scaling principles: you need to separate out into n-tier, right? You don't want the database server doing anything but serving databases. Now, having said that, Yukon (the next version of MS SQL) will indeed let you do certain things in the database with .NET languages, but that's rarely going to be a way to make your system run faster and scale more. Plus, I'm confused - what's your alternative? What database are you going to recommend that allows you to embed C# (C++, whatever) programs in the database itself?

    9. I asked a MS SQL Server DBA about real-time replication across multiple servers and his remark was "it doesn't work, don't use it."

    Sounds like it's time to get a more informed consultant who can demonstrate failure or success beyond a throwaway line. I'm not saying replication does or doesn't work, but you can't base your enterprise plans on a single line from a single guy - let alone strangers like me on Slashdot. Furthermore, this isn't a .NET question, it's an SQL question.

    It's easy to make big decisions if you break them up into a series of smaller ones. Look at each of your questions and decide if it pertains to .NET, or just a particular product. You might go with .NET and not use MS SQL Server, for that matter.
    • > Plus, I'm confused - what's your alternative? What database are you going to recommend that allows you to embed C# (C++, whatever) programs in the database itself?

      Oracle 9i and maybe 8 allows you to use Java for stored procedures.

      It has some performance improvements over PL/SQL but I never really thought that it was useful. More like "another shiny button" added to a product.
    • But you aren't exactly right either.

      You are simplifying when you say to not 'embed applications' in the DB. I will interpret 'embedding applications' in the DB as doing business logic in the database.

      Many times it is more resource-efficient for the _database server_ to perform some of the business logic itself.

      It can be more efficient for the database to do some operations which results in a relatively small result set rather than pushing a lot of data up to the application server.

  • SQL Server doesn't support C# triggers or a way to embed C# applications within the database

    Actually, SQL Server lets you embed actual binary code in the database, using 'extended stored procedures'. You load up a DLL and the code runs inside MSSQL's memory space. Obviously it's risky, but probably pretty fast. You could probably write a .NET DLL and use that in an extended stored procedure.
  • .NET Benchmarks (Score:5, Informative)

    by fine09 ( 630812 ) on Friday July 18, 2003 @07:04PM (#6475114)
    I have been developing a .NET portal application for the past few months. I ran a quick test on our application just to see how it would run.
    Specs Are as Follows:

    App Server:
    Duron 800
    512 MB RAM
    40GB HD 7200RPM

    DB Server:
    Celeron 500
    640 MB RAM
    20GB HD 7200RPM

    As you can see, these are not server-class machines, but they seem to run the app all right. I ran a simulation of this application based on the IBS Portal [] at 150 concurrent requests per second:

    The average requests per second on this app was 98.51. So, IMHO, on low-quality hardware the .NET platform can handle about 100 requests per second before it starts to get hot.
  • by mmurphy000 ( 556983 ) on Friday July 18, 2003 @07:06PM (#6475134)
    You're bound to get lots of responses of how to scale the system up. I'll focus on scaling the requirements down.

    Unless the transactions are really long, "100+ concurrent requests" as a sustained rate is a lot of activity for a small business. So that raises some questions:

    -- What percentage of these Web service requests are read-only "query" style, and can you use application-aware caching to return results out of RAM instead of having to hit disk for each one?

    -- What is the client to this application, and can there be ways to help induce a smoother load from them (e.g., discount rates if the application is used in off hours or on weekends)? Or is the 100+ concurrent requests going on 24x7?

    -- Do all the requests have to be filled by the server, or can you blend in some P2P concepts so the clients can absorb some of the load?

    -- Can you increase the amount of data handled per transaction (perhaps by switching to document-style SOAP or REST instead of RPC-style SOAP) and thereby reduce the number of requests and excessive message parsing and marshalling?

    There's probably a bunch of other things to do as well, but those came to mind off the top of my head.
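
    The first suggestion (application-aware caching of read-only query results) can start as little more than a synchronized hashtable with expiry timestamps in front of the data layer. A rough sketch, with the TTL, key scheme, and delegate shape all chosen arbitrarily:

```csharp
using System;
using System.Collections;

class QueryCache
{
    public delegate object QueryDelegate(string key);

    class Entry { public object Value; public DateTime Expires; }

    static readonly Hashtable cache = Hashtable.Synchronized(new Hashtable());
    static readonly TimeSpan ttl = TimeSpan.FromSeconds(30);

    // Return a cached result if it is still fresh; otherwise run the
    // real query and remember the result.
    public static object Get(string key, QueryDelegate runQuery)
    {
        Entry e = (Entry)cache[key];
        if (e != null && e.Expires > DateTime.Now)
            return e.Value;                    // served from RAM, no DB hit
        Entry fresh = new Entry();
        fresh.Value = runQuery(key);           // hits the database
        fresh.Expires = DateTime.Now + ttl;
        cache[key] = fresh;
        return fresh.Value;
    }
}
```

    The check-then-fill is not atomic, so under load two threads may run the same query once each; for a read-only cache that is usually an acceptable trade for simplicity.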
  • So, the customer demands the ability to serve 100 concurrent users. I can imagine two scenarios:
    1. They have 100+ employees and will be using it for internal services. At minimum wage, that means they're paying at least $515/hour in payroll. A $20,000 server will cost them a workweek's worth of capital.
    2. They're running an ecommerce site and expect a lot of traffic. At 100 simultaneous users, assuming a 1% sell rate after 10 minutes of shopping, you're looking at 6 sales per hour. If the item costs $.00
    • Two other thoughts:

      swordfish told them to use .net, asp, etc for whatever reason, and is now discovering the error of his ways.

      swordfish got into a contract where he was told "use .net and make it work or you won't get paid, or get the bonus, or whatever."

  • by binaryDigit ( 557647 ) on Friday July 18, 2003 @07:10PM (#6475172)
    You don't really describe the kind of apps you will be running to know if your observations matter in the slightest. You say that you get poor performance when your app does a lot of reflection, why is it doing reflection? Is this really a need, or are you just doing it "because you can"? Are you using this app when you further state that your performance drops by a factor of 10 vs static html? Why would you be comparing the two anyway? If you're serving static pages you shouldn't be looking at a webservice anyway, so no real sense comparing the two.

    You mentioned db issues, what type of access are you doing with your databases? Are you thinking replication to deal with scaling across a server farm? Is this data being constantly updated by the servers, or is it mainly static? If you have simple primarily read only data, then something like mysql would be a far better choice, you just don't need the overhead of a full blown db server (like sqlserver, or oracle or even postgres).

    Really what you need is to identify what your requirements are and tailor the end result to the systems that best meet those requirements. This also includes support and things like backups (e.g. can the db you choose do online backups if that's a requirement, etc).
  • by AndersDahlberg ( 525342 ) on Friday July 18, 2003 @07:12PM (#6475194) Homepage
    1, Buy *a lot* of memory for the box
    2, Cache as much as you can of the dynamic content
    3, try to stay away from bloated protocols

    1: Java and .NET are the same but different: they both require a hefty amount of RAM to operate at best performance (and at least Java just gets better the more memory is available on the server ;)

    2: Maybe doesn't help much with scalability, but performance will go up, and maybe you'll get good enough scalability too. Database access is always slower than a hashmap lookup (if said hashmap can stay in RAM, of course).

    3: Web services etc. are maybe good in theory, but at the moment those technologies are a duck in a pond when it comes to scalability and performance. Use a high-performance .NET remoting implementation instead; you can probably find a few with a quick Google search (IIOP comes to mind: a good way to make future interfacing with other technologies just as easy as with web services/SOAP while gaining better performance in the bargain).

    Also investigate how much you can make your site use asynchronous notifications, more is better - even if ms messaging client is too bad, you can write your own asynchronous "protocol".
  • by rcw-home ( 122017 ) on Friday July 18, 2003 @07:13PM (#6475206)
    ...the overhead of the framework for your code contributes only a small percentage to the total system load.

    In other words, it's not what you're using to do it, it's how you're doing it. If you're just pumping out files to clients on modems, 100+ concurrent requests isn't much. If those requests are all CPU-bound, I hope they're all niced or set to a low priority, otherwise you won't be able to log into the machine in a reasonable amount of time. If it's 100+ concurrent connections, but those connections aren't necessarily waiting for a response (just idle until the user does something) then you might not even care.

    How many whatevers you have must always be qualified by knowledge of what those whatevers are doing. Otherwise your whatevers won't fit in your $20k thingamajig. And then Mr. Bigglesworth gets upset.

    Of course, whether .NET is a properly-implemented system is a separate debate...

  • by metacosm ( 45796 ) on Friday July 18, 2003 @07:18PM (#6475245)
    Well, first of all -- you really didn't give us enough information to give a good answer, but -- it is a slashdot question, and getting all the info would ruin half the fun!

    Anyway. If you can't support 100 requests a second on 50k of modern hardware, you have huge design issues and other problems. Just from your short description of the project, I fear you have crawled into over-engineered land, because a lot of the technologies are much more useful on separate boxes/distributed environments.
    • #1. MSMQ is mature. I have seen no evidence that it won't scale. How many messages are you planning on piping through it per request? Why do you need MSMQ for an application running on a single box?
    • #2. SOAP is "heavy-weight" -- I guess I would have to ask what "light-weight" item you are comparing it to? The number you dropped is utter bullshit, but there is going to be an upper limit.
    • #3. Yes, SQL Server doesn't support C# triggers or embedding C# apps -- WHY THE HELL WOULD YOU WANT THAT -- separate your database from your application. Lots of the technology you mentioned in your question is for separating layers, and then you wanna do something SILLY like cram C# into the database.
    • #4. All requests are not even close to equal; it totally depends on the quality of your requests/stored procedures, how normalized or denormalized your data is -- and how intelligent your DBA is.
    • #5. Very fuzzy question, but on 50k of hardware for a custom-developed internal app -- you should be able to scale. Again, how stuff scales depends on how it is written.
    • #6. Uhh, ok, not really a question. Maybe avoid reflection. :)
    • #7. Ummm, just ran a local test on an MS box (2k3) and my numbers didn't drop nearly as much -- check your settings. I went from around 290 rps to 180 rps.
    • #8. Good DBA, optimize stored procs, get the data cached, use sync calls.
    • #9. It used to not work; I have recently got it working, but I still don't trust it, and most DBAs I know don't trust it either. But test it out: in most of the cases where this is needed/used, workarounds are very labor-intensive and often work like crap.

    Good Luck. Remember that C# Web apps can be multi-threaded, and remember to optimize the parts of your application that MATTER. A wise man once said "Premature optimization is the root of all evil". Find the slow parts, fix them, get the most bang for buck. Also, remember to keep those pieces loosely-bound to each other, no C# code in the DB!

    P.S. I hope you haven't over-engineered this tool as badly as it sounds like you have :)
    • I have to pick a nit on someone and you're in the wrong place at the wrong time.

      A lot of folks are lambasting this guy because he wants to do C# inside SQL Server. Most are saying, like you, that he just doesn't get it because you should separate the database from the application. That's true but it doesn't invalidate the need to have stored procedures (in any language you want be it PL/SQL or C#).

      The idea behind a stored procedure is that your application may actually scale better by putting some of the logic "close to the data" because there is less contention for machine resources other than CPU.

      For scalability, it's *generally* true that you want no processing to happen on the database because database servers are generally more expensive to scale. However, moving selected bits of logic to the database tier can result in huge scalability improvements.

      It's not one-size-fits-all and unless you have a good working understanding of the problem, which is impossible with the data given, it's probably not a good idea to yell "WHY THE HELL WOULD YOU WANT TO DO THAT" at someone. Give the guy the benefit of the doubt.

      If you think that stored procedure in C# (or any .NET language) isn't going to happen, you haven't been paying attention.
  • This works... (Score:2, Informative)

    Not to be contrary - and I'm certainly a big supporter of Open Source - but this is what works:

    Two cheap boxes, one running the web server and the other SQL Server, will outperform a single box by a wide margin. SQL Server's a pig and doesn't share well with the other children. Use back-to-back NICs to connect the SQL box so there's no network overhead...

    Check the check boxes when you compile your .Net components. Threading models matter. And a stateless contiuously instantiated module is the only s

  • MS SQL replication (Score:3, Insightful)

    by duckworth ( 71247 ) on Friday July 18, 2003 @07:20PM (#6475264)
    "I asked a MS SQL Server DBA about real-time replication across multiple servers and his remark was "it doesn't work, don't use it."

    We are running transactional replication on several large databases (6-14 GB) on a Media Metrix top-50 website with no problems. It needs to be set up correctly (batch size, timeouts, etc.) but it does work quite nicely. The DB machine is heavy hardware, but it is able to keep up with 12-15 front-end webservers, all with applications hitting the DB.
    • by Ececheira ( 86172 )
      6-14GB is a large database? We easily get 15GB per day of new data in one system that I'm working with; the SAN datastore is in the 15TB range. Oh yeah, and it's all running on Windows 2000 Server with SQL Server just fine.

  • Proper choices (Score:4, Insightful)

    by Godeke ( 32895 ) * on Friday July 18, 2003 @07:21PM (#6475272)
    I find it funny to watch the war between the "why are you suggesting open source crowd" and the "open source is the only way". I have built IIS/ASP/SQL server solutions and I have built Apache/PHP/PostgreSQL solutions. There is a place and time for both solutions.

    As an aside, I have to say that I have avoided .NET so far due to the heavy memory footprint it places on a system. Yes, VB.NET is faster than VBScript, but if you were using compiled COM objects in the first place, .NET costs more memory for a slower system. (I do think that .NET's ability to do in-place object updates rocks, but I hope you have a development server for bouncing and PLAN your updates...)

    But more to the point, your customers don't seem to have the budget to succeed in any domain. If you can't afford more than 20K for a machine and licenses, surely you can't afford to pay the programmers an adequate salary either. So does that mean open source? Heck no... you still have to pay the programmers! I don't think I have *ever* seen a project where the programmers were *cheaper* than the hardware.
  • some advice (Score:2, Insightful)

    Google regularly handles way beyond your transaction requirements, so why not look back in Slashdot for the coverage of how Google does this?

    Some hints:

    1. Google builds its own servers...

    2. Google then chooses the best OS DB combination..

  • by bertnewton ( 686123 ) on Friday July 18, 2003 @07:28PM (#6475304)

    I am the network admin at a large .Net website (5+ million unique visitors each month) and we often handle hundreds of simultaneous requests. The entire site runs on 6 webservers and two database servers that run at less than 50% capacity during peak times.

    If you can't scale above 100 connections on a 3GHz system then you are doing something wrong. Check your code, check your databases.

    Your question is about as useful as "I have a piece of string that is not long enough, what can I use instead that is longer?"

  • I know... flame me. But ignore taking religious sides for a moment and just look at what their numbers could produce [] for under $37k. They were able to exceed all of your performance requirements including using dynamically generated SQL.


  • by John Murdoch ( 102085 ) on Friday July 18, 2003 @07:38PM (#6475367) Homepage Journal


    Executive summary:

    Boring details:
    I'm goofing off, perusing SlashDot at the end of a dinner break. We're shipping a big project to a customer on Monday--the project is written in .Net (mostly C#, some components in VB), including Windows forms and ASP.Net web pages. (Why both? The project incorporates multiple applications for different kinds of users.) As part of pre-shipment testing we're in the midst of extensive testing, including load testing.

    The Windows applications communicate with the data tier using SOAP/XML, using synchronous messaging. Practically every message involves a database transaction with SQL Server 2000. Across a range of loads we are seeing round-trip message responses (from receipt of the inbound XML message to return from the web service) averaging less than 90 ms per message. That 90 ms average can be misleading: some of our messages involve extensive processing and/or lots of data. Some of the transaction work we're doing with SVG images involves SOAP messages with payloads greater than 1 MB, so the average gets dragged out.

    Based on our testing, we anticipate supporting hundreds of simultaneous users--in a near-real-time environment--from a single web service. As we scale out on larger projects we may need to scale the number of web servers (although IIS on Windows 2003 is supposed to be substantially faster--YMMV), but we won't need to scale the database. Using a similar messaging architecture for a different client I have a project supporting 400+ users on a single SQL Server.

    This is SlashDot, after all...
    Obviously you're going to get a lot of "why not use...?" posts, and I'm sure I'll get flamed for having the temerity to admit to using .Net. And recommending it. But you asked, so I'll answer: .Net is scaleable in terms of the final application, and .Net is scaleable in terms of the size of the development team that is involved. This project involves 19 developers (a total of 60+ individual projects in the nightly build) and we're able to manage the entire thing remarkably well. Developing web service applications with .Net is remarkably easy to do; developing sockets apps is unbelievably simpler than using WinInet.dll. And the web developers are extremely happy working in ASP.Net--I don't know where you heard that ASP.Net is slower than ASP, but that's simply not true. ASP.Net is significantly faster.

    With regard to other comments
    I'm the data/messaging architect on the project: I can speak to the comments about messaging, reflection, and SQL Server. As with any Microsoft-based development project, you have to think carefully, and think critically, about how to design your application. Microsoft will always give you a quick! easy! fun! way to rapidly produce a prototype. You have to dig deeper, and think harder, to produce a scalable application. The quick! easy! fun! technology du jour is .Net Remoting. Quick to prototype, balks in production. Like OLE, it's a great way to make a Pentium 4 box emulate an original 8086 IBM PC. (Far smarter to manage communication with XML-based messaging. It just takes more coding.)

    That SQL Server doesn't permit triggers to be written in C#--so? Transact-SQL is suitable for database development. We could ask for more (such as integrating stored procedures and other database code into Visual SourceSafe). There is talk that the next version of SQL Server will permit coding in .Net languages--that'd be cool, but I'll wait and see.

    The single most compelling argument for .Net
    Mono--an Open Source implementation of the .Net Framework. You might look into this particularly for clients that are choking on server pricing--but you might also pay careful attention, because a robust Mono project will encourage/force Microsoft to compete on features and functionality, instead of a take-what-we-give-you mentality. That's a Very Good Thing.

  • oxymoronic (Score:4, Funny)

    by aminorex ( 141494 ) on Friday July 18, 2003 @07:38PM (#6475372) Homepage Journal
    scalable? .NET? This is a troll, right?
  • It depends (Score:3, Interesting)

    by boatboy ( 549643 ) on Friday July 18, 2003 @08:27PM (#6475642) Homepage
    Short answer: Yes, Windows IIS (which serves .NET WebServices) supports well over 100 concurrent connections.

    Long answer: It completely depends on what you are doing. As one person pointed out, if you are performing very complex queries, then scalability would go down. There's plenty of room for bottlenecks.

    One of our ASP.NET applications benchmarks at about 90 concurrent requests on a dual-proc 1GHz Xeon. That's with several database reads per request.

    Your question is whether ".NET scales", but really you could break your problem into at least three questions:
    1) Does .NET scale well?
    Yes. It scales extremely well, provided you follow best practices and design a scalable app.
    2) Does SQL Server scale well?
    Well, but probably not the best. Again, depends greatly on the design.
    3) Does IIS scale well?
    Well, but definitely not the best. IIS is designed for both extensibility and scalability, and obviously they made trade-offs in each area. Other servers may be more scalable, but less extensible.

    Given that, I would recommend doing some very simple benchmarks: Write a web service that returns a hard-coded string. Test that. Next write a service that connects to a database and returns or adds a single record. You get the idea. You can use Microsoft's Web Application Stress Tool for this.
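
    Lacking the Microsoft tooling, the same idea can be sketched in a few lines of Python: stand up a service that returns a hard-coded string and measure raw throughput against it. Everything here (handler, port, client counts) is invented for illustration, not taken from the poster's setup:

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import urlopen

class HardCodedHandler(BaseHTTPRequestHandler):
    """Stand-in for a web service method that returns a hard-coded string."""
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep per-request logging out of the measurement

def measure_throughput(port=8765, clients=20, requests_per_client=25):
    """Hammer the hard-coded service with concurrent clients and
    return (total requests, requests per second)."""
    server = ThreadingHTTPServer(("127.0.0.1", port), HardCodedHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    def worker():
        for _ in range(requests_per_client):
            urlopen(f"http://127.0.0.1:{port}/").read()

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=clients) as pool:
        for _ in range(clients):
            pool.submit(worker)
    elapsed = time.perf_counter() - start
    server.shutdown()
    total = clients * requests_per_client
    return total, total / elapsed
```

    Run the same harness again with a handler that actually touches the database; the gap between the two numbers tells you where the time is going.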

    Another option is to use programs like Red Gate ANTS and Query Analyzer to track down any bottlenecks in your code and SQL.

    You may also consider options like remoting or even writing your own multithreaded server if you think you can squeeze better performance by implementing a thinner transport...
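
    For the "thinner transport" route, the skeleton really is small: a threaded socket server exchanging length-prefixed binary frames instead of SOAP envelopes. The following is a Python sketch of the shape only (the port, framing, and handler are all made up, and a real implementation would loop on recv() until each frame is complete):

```python
import socket
import socketserver
import struct
import threading

class BinaryEchoHandler(socketserver.BaseRequestHandler):
    """One frame = 4-byte big-endian length + payload: a few bytes of
    framing overhead versus a multi-kilobyte SOAP envelope."""
    def handle(self):
        (length,) = struct.unpack(">I", self.request.recv(4))
        payload = self.request.recv(length)  # OK for small frames on loopback
        reply = payload.upper()              # stand-in for real work
        self.request.sendall(struct.pack(">I", len(reply)) + reply)

def call(host, port, payload):
    """Client side: send one frame, read one frame back."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(struct.pack(">I", len(payload)) + payload)
        (length,) = struct.unpack(">I", sock.recv(4))
        return sock.recv(length)

def start_server(port):
    """Spin up the threaded server in the background."""
    socketserver.ThreadingTCPServer.allow_reuse_address = True
    server = socketserver.ThreadingTCPServer(("127.0.0.1", port),
                                             BinaryEchoHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```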

    Finally, while you may not want to change the web server or development platform, you do have a fairly wide range of choices as far as databases go. You could use a MySQL backend, or any database you thought was better/cheaper than SQL Server.

    In the end, I think this question is too complex to simply blame on ".NET".

    Good luck.

  • Yes (Score:5, Informative)

    by bmajik ( 96670 ) on Friday July 18, 2003 @08:32PM (#6475668) Homepage Journal
    A guy down the hall from me was in charge of taking customer web apps written in V6 technologies (vb, asp, etc) and porting them in several ways to .NET. They did extensive scalability testing on these apps. They measured requests/sec vs # of cpus, etc etc, to see how well they utilized multiproc machines.

    What I'm saying here is that you are not the first person to ever consider how .NET might run for database-driven web apps of an arbitrary size. In fact, much of it is designed for _exactly_ that.

    Re: SQL server 2000
    SQL Server 2000 has more performance than you know what to do with, even on non-ridiculous hardware. Give it processors with lots of L2 cache (Xeons) and lots of RAM, and read all the docs about keeping MDF and LDF files on separate volumes (as well as tempdb) and you'll find that life is thrilling.

    Data point: On a quad HT P4 Xeon with 8GB of ram and 12 spindles (a significantly less than $50k box) we support 1800 simultaneous connections, doing OLTP work against a ~15GB database. The most commonly hit table in the system has about 10 million rows that get added and deleted in batches of between 20 and 10,000, and updated singly or in bulk. Other apps select from this table on a polling basis (i.e. decision monitors). We could make our db and app design much "better" w.r.t. performance, but we don't need to - the money we save not having to do genius level feats of programming, app rewrites, and perf tuning more than pays for the occasional new hardware or upgrade.

    Continuing: run Performance Monitor on your SQL Server machine. Look at the physical spindle(s) that hold your MDF. If you're reading from them, buy more RAM until you're not :) Look at the I/O per sec rate to your tempdb disks and primary LDF disk(s). It is seriously to your advantage to go with an individual spindle for each role, because IO rate is what is so critically important to SQL Server. Also, avoid RAID5 like the plague, as it decimates IO rate.

    You can tune SQL Server without application changes until you're blue in the face, honestly. Use Profiler to see what kind of queries you're doing. Put those queries in Query Analyzer and show the execution plan. QA breaks it down for you and shows execution time percentages of each sub-tree of the execution plan. If you've got something eating 80% of your time and it's doing a table scan, do whatever you can to put some selectivity in that query (i.e. an index, or maybe a query change).

    If you want to save yourself some headaches, set up maintenance tasks to rebuild indexes over the weekend (or nightly, if you see that much index fragmentation after a day).

  • by His name cannot be s ( 16831 ) on Friday July 18, 2003 @08:36PM (#6475684) Journal
    Holy mother of fscking god.


    #1) If you are using the [WebMethod] shit and hosting your SOAP calls via IIS you need a smack in the head.

    #2) If you are using SOAP to communicate between the layers of your application, and are not exposing the SOAP methods for external consumers of the web services, You need more smacks in the head.

    #3) If you don't know what you are doing, hire someone who does. (and by the sound of your point #6 about using reflection and dynamic code in the production app, you don't.)

    If you are in .NET and you *NEED* a remote facility between your layers, (And if you were working for me, you'd damn well prove it), then for the love of god, switch to Remoting. Don't know what that is? Grab a book, dumbass. You can use a binary formatter and jump your speed by an order of magnitude, or you can fall back to a SOAP formatter on remoting and still double your performance.

    If you don't *NEED* a remote facility between the layers, stop using SOAP, or any other remote procedure calling solution. Nothing pisses me off more than bandwagon jumping know-nothings using a fancy fucking hammer to solve a problem which requires far less.

    It would appear the largest problem you have in overcoming your problems with .NET is your own stupidity. No matter if you are on .NET, Java, PHP+MySQL, Perl or x86 Assembler, it would appear that you do not have the experience to sufficiently manage either your application development or your client's expectations.

    Bottom line: to support 100+ concurrent requests, there is no reason you shouldn't be able to do it for under 20K... (although I wonder where that number came from.. Do these servers sit in a vacuum? Who's running them?)

    From a purely academic standpoint, what the heck were you guys thinking when you were going to spend only 20K on the hardware for an app that does 100+ concurrent transactions? That sounds like enough business to afford quite a heck of a lot more.

    If you are/were so budget constrained, why are you spending thousands on server software (.NET Server, SQL Server, etc.)? If you are so budget constrained, you shoulda gone open source.
  • by The Bungi ( 221687 ) on Friday July 18, 2003 @08:37PM (#6475690) Homepage
    Really, really. I won't add to the many good comments about the topic, but let me say this: if you don't know what you're doing (and from your questions I assume you don't), invest a bit of money and hire a good architect for a couple of weeks. Not only will he/she answer your questions, but will probably get you started on a good design and a decent implementation.

    I've designed infrastructure and application-level systems that use .NET and happily meet your requirements (MSMQ is not scalable? Huh?), and then some. So yes, to answer all your questions, it works. But if you don't know what you're doing it's very simple to fuck it up, regardless of whether you're using Microsoft products or not.

    Coming here (!) and asking questions about whether or not a given Microsoft product is viable seems to me like a losing proposition. FWIW, most professionals that work with Microsoft technologies are far more willing to admit shortcomings in those products and suggest alternatives, something that the /. crowd seems incapable of. So at least if you hire someone in the know you won't get BS left and right.

    So get some help.

  • by jalilv ( 450956 ) on Friday July 18, 2003 @09:28PM (#6475964) Homepage has ported their website to .NET. One of the developers of the site, Jason Alexander, has posted a post mortem on his blog. While they have 45 servers in their web farm running the site, he may be a very important source that can answer your question.

    - Jalil Vaidya
  • by tomq123 ( 194265 ) on Friday July 18, 2003 @10:32PM (#6476203)
    chances are your job is going to get outsourced to India in a few weeks. They can accomplish this task for you at a fraction of the cost.
  • by FatherOfONe ( 515801 ) on Friday July 18, 2003 @11:16PM (#6476361)
    Your first question is can .Net scale? Answer is yes. The second question is can .Net scale on your budget? That is a much harder question to answer. My initial reaction is no, given your concerns and the amount of effort you have already put in to it.

    I am by no means a fan of Microsoft. To be honest I hope that your project dies, and this can be added to the long list of people I know who bet the farm on Microsoft only to end up with far more NT servers than employees, or out of business... but I will give my 2 cents.

    You seem to have defined some of the basic bottlenecks of performance. What you appear to leave out is what happens at certain loads. Does the system die? Probably not, but what happens to the response time? What are the acceptable requirements for the system? You may find 25 seconds for a page to load unacceptable, but the users may not. Either way it will let you know what goal you need to hit. Can you configure your DB to use less or more RAM?

    Next, are you sure processor load is the issue? My guess is that you would be far better off with an x86 chip with more cache and stronger memory bandwidth than a standard P4. Granted this involves another hardware purchase, but if that becomes an option at all look at an Opteron or Xeon chip in a 2-way system. You can get one of those systems well under 4 grand. The Opteron flat out rocks and the new Xeon 3GHz with 1MB cache should be hitting the streets soon.

    Not knowing much about the dark side's languages (Java is my thing), are you using one database connection throughout your application? Not returning it back to a connection pool, but storing it in the session object? This can have a significant impact on performance.
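
    To make the distinction concrete (sketched in Python rather than .NET or Java, with invented names): the pooled pattern borrows a connection per request and always returns it, so a handful of connections can serve many interleaved requests, while a connection parked in a session object is tied up for that session's entire lifetime:

```python
import queue

class ConnectionPool:
    """Minimal check-out/check-in pool: each request borrows a connection
    and returns it, instead of caching one in the user's session."""
    def __init__(self, factory, size=5):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(factory())

    def acquire(self, timeout=None):
        # Blocks (up to timeout) when every connection is checked out.
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        self._pool.put(conn)

def handle_request(pool, work):
    """The per-request pattern: borrow, use, always return."""
    conn = pool.acquire()
    try:
        return work(conn)
    finally:
        pool.release(conn)
```

    The try/finally is the important part: a connection that isn't returned on the error path leaks out of the pool just as surely as one stored in the session.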

    Seeing that you said you talked to a SQL Server expert (I have never met one), I will assume that he looked through the code and optimized all the SQL. Everyone seems to be taking cheap shots and saying you should have used product X, well here is my cheap shot... Next time use Oracle! I repeat next time use Oracle! Ok, it bears repeating one more time.... next time use Oracle. Granted it is expensive, but you are learning a lesson that a ton of shops here in Indy have had to learn the hard way. Well what the heck next time use Java + JBOSS or Resin + Oracle + Linux. In our environment it flat out rocks.

    What else is running on the box? You can buy a sub-$500 machine and move all the DNS/AD stuff to it. Not sure how much that impacts performance though... it may not be worth it. But my point is to turn off every unused service. Also, I will assume that you have applied every service pack, and called Microsoft. Since you are using ALL their products, you would think that they would help you. God I would love to be in on that call!!! All I ever hear them say is "You need to get off of product x" and use our product.

    Generally what I find to be the issue with performance is SQL and DB access. The code takes next to nothing to execute, processor-wise. Now what kind of DB are you talking about? How many tables, and how many rows in each table? What kind of transactions do you do (mostly inserts or queries)? Are the indexes set up correctly on the tables? Could you flatten some relationships down?

  • Sounds like a troll (Score:3, Interesting)

    by Anonymous Coward on Friday July 18, 2003 @11:23PM (#6476376)
    But I'll bite. I have a little experience with .NET stuff. From my limited knowledge of a couple high profile blunders in the financial industry, Windows is very tough to scale to the levels Unix can (obvious duh). I will qualify this with real details. I'm posting AC for a reason, but you can probably find the details through google. The blunder I heard about was about a company that provides trading systems (OMS/Compliance), which managed to get their foot in the door at two big trading firms in NY. The company went in with their solution which was apparently a C++/OLE-based app server with some kind of messaging system. I should mention this is second hand info, but it is well known in the industry for trading systems.

    The kind of loads big firms need to support are on the order of tens of millions of users with millions of transactions a day. What I mean by transactions is a buy process, which can contain a dozen to a couple hundred individual orders. In other words, the number of complex inserts/updates is tens of millions to hundreds of millions a day.

    For example, big firms like Fidelity, Citigroup, Thomson, Vanguard, and Schwab have millions of customers with a hundred thousand plus portfolio managers. Throughout a given work day, a portfolio manager may generate a couple hundred orders and submit them in one or two batches. This is done because it's cheaper for them. Can .NET scale well? Like what others say, it can if you design it right. For example, if you use MSMQ for its designed job it works well. If you write your queues for MSMQ with plain hashtables and you don't index the messages, your chances of supporting 10K+ messages a second aren't likely. On the other hand, if you write custom queues, profile the messages, index them efficiently and make sure no other heavyweight stuff sits on the same box, it can scale. Is that easy? No. You have to understand the problem you're trying to solve. Let's say hypothetically you have insane performance requirements like 100K+ messages a second for a messaging tier: you're better off using IBM MQSeries. Can you do the same thing with MSMQ? Sure, if you build a bunch of custom stuff, write the messages to a database, index, partition and load balance. It will probably take you 8-12 months to do it, but you can with the right people and good hardware. Would you want to use XML for that messaging system? The answer is obviously no, if you want to keep the cpu and memory loads manageable.
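
    The "index the messages" point can be made concrete with a toy sketch (Python purely for illustration; the class and method names are invented, not any MSMQ API): keep a secondary index per business key alongside the FIFO, so a consumer can pull the next message for one order without scanning every queued message:

```python
from collections import defaultdict, deque

class IndexedQueue:
    """FIFO with a secondary index by business key (e.g. an order id).
    Finding the next message for a key is O(1) via the index instead of
    an O(n) scan over a flat hashtable of messages."""
    def __init__(self):
        self._fifo = deque()
        self._by_key = defaultdict(deque)

    def put(self, key, message):
        self._fifo.append((key, message))
        self._by_key[key].append(message)

    def get(self):
        """Oldest message overall (plain queue behaviour)."""
        key, message = self._fifo.popleft()
        self._by_key[key].remove(message)  # keep the index in sync
        return key, message

    def get_by_key(self, key):
        """Oldest message for one key, found via the index. (Removal from
        the FIFO is still linear in this toy version; a real queue would
        use a tombstone or linked nodes to make that O(1) too.)"""
        message = self._by_key[key].popleft()
        self._fifo.remove((key, message))
        return message
```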

    Many people have claimed they support thousands of transactions. Sure, if all you're doing is inserting into one table. Simple stuff, right? Financial transactions like trading systems do a heck of a lot more than a simple insert into one table. More often than not, a trade transaction with 100 orders goes into the database, affecting several tables. The middle tier then has to get events, and check the order to make sure it is valid and does not violate regulations or other compliance requirements. Sometimes it requires analytics like Tibco or what the industry calls Business Intelligence. Regardless of the server, stuff like analytics takes time (seconds). Obviously if you're running complex analytics that scan 10 million rows of data with several joins in the query, you're better off using an analytics server like OLAP. Can .NET handle 1K analytics requests per second? If it's cached, sure. If the nature of the data is very dynamic, like realtime trading systems, no way. Doing that is very hard and most people avoid it.

    The key here is setting the expectations accurately, so your customer knows what is realistic. If you have a hard time communicating that to your customer or management, then find another job.

  • by YuppieScum ( 1096 ) on Saturday July 19, 2003 @03:54AM (#6477060) Journal
    This is not rocket science, and I had presumed this rule had been learned a long time ago... but here it is again:

    "To ensure scalability, host each server-component of an application on it's own hardware - optimsed for the specific task assigned."

    In other words, DO NOT deploy everything onto one machine. Remember the old adage "Jack of all trades, master of none".

    So, put the database server on its own box, with dual cpu, loads of memory and RAID-mirrored drives.

    Put IIS, the ASP.Net app (and the web services if you're feeling cheap) onto a fast, single cpu box, enough memory to turn off paging and a single drive - GHOST'd onto CD for backup.

    Install an extra net card in both, and set it up solely as the route for traffic between them.

    Implementing this hardware for less than 20K should be trivial.

    If you can't comfortably support 200 concurrent users with this, you need professional help - my consulting rates are quite reasonable...
