Microsoft

Can .NET Really Scale? 653

swordfish asks: "Does anyone have first-hand experience with scaling .NET to support 100+ concurrent requests with web services on a decent 2-4 CPU box? I'm not talking about a cluster of 10 dual-CPU systems, but a single system. The obvious answer is 'buy more systems', but what if your customer says 'I only have $20K budgeted for the year'? No matter what Slashdot readers say about buying more boxes, try telling that to your client, who can't afford anything more. I'm sure some of you will think, 'what are you smoking?' But the reality of current economics means $50K on a server is a huge investment for small companies. One could argue that 5 cheap systems at $3K each could support that kind of load, but I haven't seen it, so inquiring minds want to know!"

"Ok, I've heard from different people as to whether or not .NET scales well, and I've been working with it for the last 7 months. So far, from what I can tell, it's very tough to scale, for a couple of different reasons:

  1. Currently there isn't a mature messaging server, and MSMQ is not appropriate as a high-load messaging platform.
  2. SOAP is too damn heavyweight to scale well beyond 60 concurrent requests on a single-CPU 3GHz system.
  3. SQL Server doesn't support C# triggers or any way to embed C# applications within the database.
  4. The throughput of SQL Server is still around 200 concurrent requests for a single- or dual-CPU box. I've read the posts about the Transaction Processing Performance Council, but get real: who can afford to spend $6 million on a 64-CPU box?
  5. The clients we target are small-ish, so they can't spend more than $30-50K on a server. So where does that leave you in terms of scalability?
  6. I've been running benchmarks with dynamic code that does quite a bit of reflection, and the performance doesn't impress me.
  7. I've also compared the performance of a static ASP/HTML page to a web service page, and the throughput drops from 150-200 to about 10-20 on a 2.4-2.6GHz system.
  8. To get good throughput with SQL Server you have to use async calls, but what if you have to do sync calls? From what I've seen the performance isn't great (it's OK), and I don't like the idea of setting up partitions. Sure, you can put mirrored RAID on all the DB servers, but that doesn't help me if a partition goes down and the data is no longer available.
  9. I asked an MS SQL Server DBA about real-time replication across multiple servers, and his remark was "it doesn't work, don't use it."
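Point 8 above is really about overlapping I/O waits, which any platform can model. A minimal sketch, in Python as a hypothetical stand-in for .NET's async database calls (the 50 ms "query" delay is made up):

```python
import asyncio
import time

QUERY_DELAY = 0.05  # made-up stand-in for one database round trip

async def fake_query():
    # asyncio.sleep yields control, the way an async DB call frees
    # the worker while the database does the real work.
    await asyncio.sleep(QUERY_DELAY)
    return 1

async def async_batch(n):
    # All n "queries" wait concurrently: wall time is ~one QUERY_DELAY.
    results = await asyncio.gather(*(fake_query() for _ in range(n)))
    return sum(results)

def sync_batch(n):
    # Each "query" blocks the caller: wall time is ~n * QUERY_DELAY.
    total = 0
    for _ in range(n):
        time.sleep(QUERY_DELAY)
        total += 1
    return total

t0 = time.perf_counter()
async_result = asyncio.run(async_batch(20))
async_time = time.perf_counter() - t0

t0 = time.perf_counter()
sync_result = sync_batch(20)
sync_time = time.perf_counter() - t0

print(async_result, sync_result, async_time < sync_time)
```

With 20 simulated queries the async version finishes in roughly one query delay while the sync version pays all twenty in series; the same arithmetic is why synchronous SQL Server calls cap throughput so much earlier.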
This discussion has been archived. No new comments can be posted.

  • by corebreech ( 469871 ) on Friday July 18, 2003 @06:47PM (#6474960) Journal
    If they're that strapped for cash they should be looking at open source.
  • by christoofar ( 451967 ) on Friday July 18, 2003 @06:49PM (#6474977)
    ... but Unix/Java programmers aren't. Wanting to write the code for free, too?
  • Solution (Score:3, Insightful)

    by Synithium ( 515777 ) on Friday July 18, 2003 @06:51PM (#6474988)
    Apache, FreeBSD and a cluster of 10 or so $1k servers and a nice DB server running PostgreSQL.

    Works for me.
  • by Anonymous Coward on Friday July 18, 2003 @06:54PM (#6475013)
    It's a damn simple question: can .NET really scale?

    Why on earth did you bring open source into it? If the man wanted to know about Linux & BSD, he would've asked.

    If you don't have any experience with the scalability of .NET, I advise you to keep your mouth shut. The signal/noise ratio is bad enough already.
  • well... (Score:4, Insightful)

    by confusion ( 14388 ) on Friday July 18, 2003 @06:56PM (#6475025) Homepage
    My first inclination is to recommend throwing that $20k at an ASP that can provide the server infrastructure to give you support for 100 concurrent connections.

    Barring that, my recommendation would be to split the web front end and database, spending about $10k on each (using Dell or HP). I can almost guarantee that you aren't going to get 100 concurrent connections for less than $80k to $100k without doing some sort of load distribution. If you strip down the amount of dynamic content and, say, script a refresh of a static page, you might be able to do it, but we don't really know what the app is going to be doing.

    Jerry
  • by KrispyKringle ( 672903 ) on Friday July 18, 2003 @06:57PM (#6475028)
    Based on what?

    A) This consultant, it sounds like, is largely or exclusively MS. He's not going to suggest Open Source software to his client because that will mean a loss in business. You can hardly blame him; you gotta go with what you know.

    B) Oftentimes a commercial solution to some problems exists where a free one does not. The cost of development and maintenance means that the balance is not strictly in terms of free and non-free; after all, your developers' time costs quite a bit as well, and home-grown or open source solutions may need more time spent on administration.

    This is a pretty complex issue; different analyses have been done with different results. I myself am partial to Open Source, but this does not mean that the obvious answer is, "Hey, go Open Source! It's free!" Get real.

  • by valkraider ( 611225 ) on Friday July 18, 2003 @06:57PM (#6475030) Journal
    I don't really know an answer but I will throw in my tidbit.

    But first let me apologize for all the nutheads who say "drop MS - use Linux" and all the derivatives thereof. That doesn't help anyone, and doesn't answer the question. Might as well say "use a dustmop, it works great on my floors!".

    My advice would be to *try* and use a cluster of some sort instead of the one server approach. Sure, you can get some great big reliable iron - that is wicked fast... But what I have found is that scaling really needs more *bandwidth*. Not network bandwidth but memory, disk, I/O, that sort of bandwidth. Of course, the more machines - the more licenses... Good luck!
  • by SamBeckett ( 96685 ) on Friday July 18, 2003 @07:00PM (#6475065)
    This entire story is lacking units.. I am so confused, it is like this...

    "I bought a 400 car from my dealer, who said it could go 0-1200 in 57, but I talked to an auto mechanic and he said that the rpm throttled at 4.5 billion, so I don't know if I should get a turbo charger which would at least boost the speed to 1295!!"

    If you are talking about 100 concurrent request per second: Any DB worth its salt should handle that IFF the database queries aren't too complex. If they are, your schemas suck. This is doubly true on a 3 GHz machine.
  • by Brento ( 26177 ) * <brento.brentozar@com> on Friday July 18, 2003 @07:01PM (#6475077) Homepage
    2. SOAP is too damn heavy weight to scale well beyond 60 concurrent requests for a single CPU 3ghz system.

    It doesn't sound like you're talking about .NET specifically, but just SOAP in general. Make sure you separate out the platform from the product. Saying web services with SOAP won't work is a long way away from saying .NET doesn't scale.

    3. SQL Server doesn't support C# triggers or a way to embed C# applications within the database

    Embedding applications in the database violates basic scaling principles: you need to separate out into n-tier, right? You don't want the database server doing anything but serving databases. Now, having said that, Yukon (the next version of MS SQL) will indeed let you do certain things in the database with .NET languages, but that's rarely going to be a way to make your system run faster and scale more. Plus, I'm confused - what's your alternative? What database are you going to recommend that allows you to embed C# (C++, whatever) programs in the database itself?

    9. I asked a MS SQL Server DBA about real-time replication across multiple servers and his remark was "it doesn't work, don't use it."

    Sounds like it's time to get a more informed consultant who can demonstrate failure or success beyond a throwaway line. I'm not saying replication does or doesn't work, but you can't base your enterprise plans on a single line from a single guy - let alone strangers like me on Slashdot. Furthermore, this isn't a .NET question, it's an SQL question.

    It's easy to make big decisions if you break them up into a series of smaller ones. Look at each of your questions and decide if it pertains to .NET, or just a particular product. You might go with .NET and not use MS SQL Server, for that matter.
  • by ThatDamnMurphyGuy ( 109869 ) on Friday July 18, 2003 @07:03PM (#6475101) Homepage
    People in SMALL business do not want a system which requires them to hire someone to constantly keep tabs on it.

    What?#$#@ I don't care who this "SMALL" business may be, but if you put a server on the internet and plan on not having someone "keep tabs on it", please, get off of the f-ing internet. It's that type of mentality that yields the servers out there that are STILL spreading Code Red and Nimda, because nobody has kept tabs on those infected servers in years.
  • by mmurphy000 ( 556983 ) on Friday July 18, 2003 @07:06PM (#6475134)
    You're bound to get lots of responses of how to scale the system up. I'll focus on scaling the requirements down.

    Unless the transactions are really long, "100+ concurrent requests" as a sustained rate is a lot of activity for a small business. So that raises some questions:

    -- What percentage of these Web service requests are read-only "query" style, and can you use application-aware caching to return results out of RAM instead of having to hit disk for each one?

    -- What is the client to this application, and can there be ways to help induce a smoother load from them (e.g., discount rates if the application is used in off hours or on weekends)? Or is the 100+ concurrent requests going on 24x7?

    -- Do all the requests have to be filled by the server, or can you blend in some P2P concepts so the clients can absorb some of the load?

    -- Can you increase the amount of data handled per transaction (perhaps by switching to document-style SOAP or REST instead of RPC-style SOAP) and thereby reduce the number of requests and excessive message parsing and marshalling?

    There's probably a bunch of other things to do as well, but those came to mind off the top of my head.
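The last point above — cutting the number of requests when each one carries fixed parsing and marshalling overhead — is just arithmetic; a sketch with made-up cost numbers:

```python
import math

def total_cost(items, per_request_overhead, batch_size, per_item_cost=1.0):
    # Each request pays a fixed overhead (envelope parsing, marshalling,
    # connection handling) plus a per-item cost for the actual work.
    requests = math.ceil(items / batch_size)
    return requests * per_request_overhead + items * per_item_cost

# Hypothetical numbers: 10,000 items, overhead worth 50 "item units" per request.
rpc_style = total_cost(10_000, per_request_overhead=50, batch_size=1)    # one item per call
doc_style = total_cost(10_000, per_request_overhead=50, batch_size=100)  # document-style batches

print(rpc_style, doc_style)  # 510000.0 15000.0
```

Under these invented numbers the batched version makes a hundredth of the requests and does about 34x less total work; the real ratio depends entirely on how heavy the per-request overhead actually is.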
  • by autopr0n ( 534291 ) on Friday July 18, 2003 @07:07PM (#6475147) Homepage Journal
    A) This consultant, it sounds like, is largely or exclusively MS. He's not going to suggest Open Source software to his client because that will mean a loss in business. You can hardly blame him; you gotta go with what you know.

    Sure you can. If he's not smart enough to figure out how to do what these people want using the Microsoft 'suite' of software (Windows, SQL Server, ASP), or the OSS one (Linux, Apache, PHP, whatever), or the Java one (some servlet engine, JSP, etc.), then he really doesn't deserve the contract, IMO. That stuff isn't that hard to figure out.

    The amount of money they'd save using OSS would be enough to buy at least one more whole box (SQL Server ain't cheap).
  • by Atzanteol ( 99067 ) on Friday July 18, 2003 @07:09PM (#6475161) Homepage
    If this guy is a consultant, sometimes clients have specifications for what type of hardware/software is used. Especially if their own IT group will be maintaining the systems.
  • by binaryDigit ( 557647 ) on Friday July 18, 2003 @07:10PM (#6475172)
    You don't really describe the kind of apps you'll be running, so it's hard to know if your observations matter in the slightest. You say that you get poor performance when your app does a lot of reflection; why is it doing reflection? Is it really needed, or are you just doing it "because you can"? Are you using this app when you further state that your performance drops by a factor of 10 vs. static HTML? Why compare the two anyway? If you're serving static pages you shouldn't be looking at a web service, so there's no real sense comparing them.

    You mentioned db issues; what type of access are you doing with your databases? Are you thinking of replication to deal with scaling across a server farm? Is this data being constantly updated by the servers, or is it mainly static? If you have simple, primarily read-only data, then something like MySQL would be a far better choice; you just don't need the overhead of a full-blown db server (like SQL Server, or Oracle, or even Postgres).

    Really what you need is to identify what your requirements are and tailor the end result to the systems that best meet those requirements. This also includes support and things like backups (e.g. can the db you choose do online backups if that's a requirement, etc).
  • by AndersDahlberg ( 525342 ) on Friday July 18, 2003 @07:12PM (#6475194) Homepage
    1, Buy *a lot* of memory for the box
    2, Cache as much as you can of the dynamic content
    3, try to stay away from bloated protocols

    1: Java and .NET are the same but different - they both require a hefty amount of RAM to operate at best performance (and at least Java just gets better the more memory is available on the server ;)

    2: Maybe this doesn't help much with scalability, but performance will go up - and maybe you'll get good enough scalability too. Database access is always slower than a hashmap lookup (if said hashmap can stay in RAM, of course)

    3: Web services etc. may be good in theory, but at the moment those technologies are a duck in a pond when it comes to scalability and performance. Use a high-performance .NET remoting implementation instead - you can probably find a few with a quick Google search (IIOP comes to mind: a good way to keep future interfacing with other technologies just as easy as with web services/SOAP while gaining better performance in the bargain).

    Also investigate how much you can make your site use asynchronous notifications; more is better - even if the MS messaging client is too weak, you can write your own asynchronous "protocol".
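The parent's point 2 — a hashmap lookup in RAM beats a database round trip — can be sketched as a tiny read-through cache (hypothetical Python; the backend function is a stand-in for a real DB query, not any specific .NET API):

```python
class QueryCache:
    """Tiny read-through cache: repeated queries are answered from a
    dict in RAM instead of going back to the database each time."""

    def __init__(self, backend):
        self.backend = backend  # callable standing in for a real DB query
        self.store = {}
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.store:
            self.hits += 1      # answered from RAM, no DB round trip
            return self.store[key]
        self.misses += 1
        value = self.backend(key)
        self.store[key] = value
        return value

# Hypothetical "expensive" backend that records every real lookup.
db_calls = []
def slow_db(key):
    db_calls.append(key)
    return key.upper()

cache = QueryCache(slow_db)
for _ in range(100):
    cache.get("product-42")

print(cache.hits, cache.misses, len(db_calls))  # 99 1 1
```

A real cache also needs invalidation or expiry when the underlying rows change, which is where the hard part lives.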
  • Re:Java is a DOG (Score:5, Insightful)

    by the eric conspiracy ( 20178 ) on Friday July 18, 2003 @07:12PM (#6475200)
    Example configuration is a Windows 2000 box with dual Xeons and 2GB of RAM

    I wrote and administer a J2EE application that supports online rebate offers for a very large company. We have over 350,000 registered users and typically 500 simultaneous sessions on a dual 1 GHz PIII Linux box with MS SQL Server on a similar dual CPU W2K box for the database.

    Whatever you are doing with your application (probably misapplication of EJB) is wrong.

  • by rcw-home ( 122017 ) on Friday July 18, 2003 @07:13PM (#6475206)
    ...the overhead of the framework for your code contributes only a small percentage to the total system load.

    In other words, it's not what you're using to do it, it's how you're doing it. If you're just pumping out files to clients on modems, 100+ concurrent requests isn't much. If those requests are all CPU-bound, I hope they're all niced or set to a low priority, otherwise you won't be able to log into the machine in a reasonable amount of time. If it's 100+ concurrent connections, but those connections aren't necessarily waiting for a response (just idle until the user does something) then you might not even care.

    How many whatevers you have must always be qualified by knowledge of what those whatevers are doing. Otherwise your whatevers won't fit in your $20k thingamajig. And then Mr. Bigglesworth gets upset.

    Of course, whether .NET is a properly-implemented system is a separate debate...

  • by nvrrobx ( 71970 ) on Friday July 18, 2003 @07:14PM (#6475215) Homepage
    Argh, I hate to give up moderation rights but I have to chime in here.

    A small business CANNOT afford to employ a full time UNIX administrator. Open source solutions just do not have the ease of administration of the Windows GUIs. Until they do, they will not be small business friendly. Windows Small Business Server provides you with one installer that will basically set you up completely (Exchange Server and all).

    Now, before you flame me out for being pro-Microsoft, you should know that almost all my machines at home run Gentoo Linux, and I prefer to use Linux myself.

    I had a long discussion with a good friend who is not terribly computer literate. Linux drives him _crazy_ because he can't just, "point, click and go" as he said it. Until these issues are resolved, we won't see small organizations without dedicated IT staff rolling out Linux installs.
  • by the eric conspiracy ( 20178 ) on Friday July 18, 2003 @07:18PM (#6475239)
    newcomers are really, really cheap!

    LOL. Newcomers are the most expensive programmers there are because they draw a salary, but don't write usable code.

  • MS SQL replication (Score:3, Insightful)

    by duckworth ( 71247 ) on Friday July 18, 2003 @07:20PM (#6475264)
    "I asked a MS SQL Server DBA about real-time replication across multiple servers and his remark was "it doesn't work, don't use it."

    We are running transactional replication on several large databases (6-14 GB) on a Media Metrix top-50 website with no problems. It needs to be set up correctly (batch size, timeouts, etc.) but it does work quite nicely. The DB machine is heavy hardware, but it is able to keep up with 12-15 front-end webservers, all with applications hitting the DB.
  • by Anonymous Coward on Friday July 18, 2003 @07:20PM (#6475266)
    "IT group"? In a company whose total budget for a new machine running a mission-critical service is $50k?
  • Proper choices (Score:4, Insightful)

    by Godeke ( 32895 ) * on Friday July 18, 2003 @07:21PM (#6475272)
    I find it funny to watch the war between the "why are you suggesting open source" crowd and the "open source is the only way" crowd. I have built IIS/ASP/SQL Server solutions and I have built Apache/PHP/PostgreSQL solutions. There is a place and time for both.

    As an aside, I have to say that I have avoided .NET so far due to the heavy memory footprint it places on a system. Yes, VB.NET is faster than VBScript, but if you were using compiled COM objects in the first place, .NET costs more memory for a slower system. (I do think that .NET's ability to do in-place object updates rocks, but I hope you have a development server for bouncing, and PLAN your updates...)

    But more to the point, your customers don't seem to have the budget to succeed in any domain. If you can't afford more than 20K for a machine and licenses, surely you can't afford to pay the programmers an adequate salary either. So does that mean open source? Heck no... you still have to pay the programmers! I don't think I have *ever* seen a project where the programmers were *cheaper* than the hardware.
  • some advice (Score:2, Insightful)

    by linuxislandsucks ( 461335 ) on Friday July 18, 2003 @07:22PM (#6475282) Homepage Journal
    Google regularly handles way beyond your transaction requirements; why not look back through Slashdot for the coverage of how Google does this?

    Some hints:

    1. Google builds its own servers...

    2. Google then chooses the best OS/DB combination...

  • by bertnewton ( 686123 ) on Friday July 18, 2003 @07:28PM (#6475304)

    I am the network admin at a large .NET website (5+ million unique visitors each month) and we often handle hundreds of simultaneous requests. The entire site runs on 6 webservers and two database servers that run at less than 50% capacity during peak times.

    If you can't scale above 100 connections on a 3GHz system then you are doing something wrong. Check your code, check your databases.

    Your question is about as useful as "I have a piece of string that is not long enough, what can I use instead that is longer?"

  • by meme_police ( 645420 ) on Friday July 18, 2003 @07:29PM (#6475314)
    Come on. Small businesses don't need to employ a full-time UNIX administrator if the consultant does his project and training right. If the consultant has Windows experience then he should provide a Windows solution; if the consultant has open source experience then he should provide an open source solution. Once the complexity for the user moves beyond a simple click or two, the training issues are going to be the same whether it's Windows or UNIX, GUI or CLI.

    And since he's talking about web services I would think he would be providing a web administration interface. If something breaks on the backend it's going to take a consultant to fix things whether it's Windows or UNIX.

    I agree with the one poster that if this guy has low budget clients then he needs to be reducing costs in software so he can spec better hardware. If that software is open source then he needs to start learning open source stuff or find richer clients.

  • by zulux ( 112259 ) on Friday July 18, 2003 @07:34PM (#6475343) Homepage Journal
    A small business CANNOT afford to employ a full time UNIX administrator.

    They can't afford NOT to: we service many small companies who use Windows desktops connected to UNIX (OpenBSD firewalls, FreeBSD servers). The savings in time alone are staggering.

    Real example:
    One office of ten accountants was managed by me last year for under $3000.
    They have offsite backups, a PostgreSQL database, Samba file serving, 56K NAT, a firewall, and email filtering.

    If (and it's a BIG if) one of the servers has a problem - I can remotely fix it over my cell phone connection, and I don't have to charge them travel time. If it were Windows - I'd have to drive there.

    Windows is expensive because it requires full-time baby-sitting. UNIX, once deployed, is usually fire and forget.
  • by digidave ( 259925 ) on Friday July 18, 2003 @07:39PM (#6475375)
    A common misconception is that anybody can administer an MS server, but the truth is that it's not a whole lot easier to do than administer a Unix box. What's scary is that it looks easier and most IT managers think it's easier. That's why most Windows admins are grossly incompetent, especially when it comes to security.

    A good Windows admin costs the same as a good Unix admin.
  • by Alioth ( 221270 ) <no@spam> on Friday July 18, 2003 @07:39PM (#6475376) Journal
    I had a long discussion with a good friend who is not terribly computer literate. Linux drives him _crazy_ because he can't just, "point, click and go" as he said it.

    Windows systems need an administrator every bit as clueful as a UNIX sysadmin if they are to have any reliability at all. If the Windows 'sysadmin' has to be able to point-click-go to be able to function, in all probability the Windows system will be unreliable and insecure.

    It is a false economy to think that "It's Windows. I can hire a junior reboot monkey to admin the system" - a Windows system really does require a sysadmin every bit as competent, skilled and clueful as a Unix system. A Windows system can be very reliable with a clueful admin - but it *needs* a clueful admin. Companies are shooting themselves in the foot if they think otherwise.
  • Re:Why one server? (Score:5, Insightful)

    by etcshadow ( 579275 ) on Friday July 18, 2003 @07:40PM (#6475386)
    There are actually lots of reasons. Not to say that in all cases you *should* go with a big server instead of a bunch of little weeny-boxen... but the point is that "bigger server" doesn't equal "bad". Here are a few reasons:

    For one, there's reliability:

    -first of all, the more expensive systems have more internal redundancy, which is a good thing (sucks to hamstring even a cheap $1000 machine because the $5 cpu-fan dies, let alone a $3000 middle-of-the line machine because a $50 power-supply dies... or the $5 fan inside the $50 power-supply).

    -if p(c) is the probability of a cheap machine crashing, and p(e) is the probability of a single expensive machine (your entire system) crashing, and you require all N of your cheap computers to be running in order to constitute an "up" system... then your overall system crash probability (p*) is:

    p*(c) = 1-(1-p(c))^N

    vs.

    p*(e) = p(e)

    so, by buying more, cheaper servers, you're increasing your crash-likelihood, by both increasing p(c) and increasing N (unless you buy additional cheap servers to failover to... but then you have to manage and support failover which is additional $$$ as well in terms of buying/developing/implementing more advanced systems and taking on a higher administration overhead).

    Not all systems are distributable, and those that are are often more complicated and/or expensive (but not always).

    There's also administration cost:

    -Obviously it's easier to manage one box than 10 (or easier to manage 5 boxes than a hundred). Not to say that there aren't nice tools for mass administration... but it is still more work, and anyone who says different is selling something (and something you want to think twice about before buying).

    There's ancillary costs:

    -hey! if you have ten boxes talking to each other to comprise one "system", then you need a network connecting them! That's another fast switch... and again, because you don't want to lose an expensive "system" because of a failure of one cheap part, you need to buy an expensive switch.

    -power costs money, believe it or not.

    -so does rack-space.

    -so do IPs... unless you're gonna NAT your little cluster, in which case you need to set up a NATing router for them... and that's another single point of failure unless you wanna shell out $$$ of one form another (again: buy/develop/implement).

    -you're probably gonna need some sort of KVM switch.

    I could go on, but I don't want to. Anyway, the point is that it is more complicated than many of the lot in this particular audience are likely to make out. It is often still the best route (and increasingly so!), but you can't just say that the answer is *always* to buy more, cheaper machines. There are many things to consider.
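The reliability formula in the comment above, p*(c) = 1-(1-p(c))^N, is easy to check numerically; a small sketch with made-up failure rates:

```python
def cluster_crash_prob(p_single, n):
    # p* = 1 - (1 - p)^N: probability that at least one of N machines
    # is down, when all N must be up for the system to count as "up".
    return 1 - (1 - p_single) ** n

# Made-up numbers: each cheap box is down 2% of the time,
# the single expensive box 3% of the time.
p_cheap_cluster = cluster_crash_prob(0.02, 10)
p_expensive_box = cluster_crash_prob(0.03, 1)

print(round(p_cheap_cluster, 4), round(p_expensive_box, 4))  # 0.1829 0.03
```

So under these invented rates, requiring all ten cheap boxes to be up makes the cluster roughly six times more likely to be down than the single box — exactly the parent's point, unless you add failover (with all the cost and complexity that brings).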
  • bad analogy. (Score:2, Insightful)

    by twitter ( 104583 ) on Friday July 18, 2003 @07:41PM (#6475392) Homepage Journal
    It's more like people who don't know what they are talking about have purchased equipment that could do the job, if only they would not insist on ASP and other Microshit. It's more like someone bought a nice diesel pickup truck to haul manure, but insists on using model airplane fuel to make it go. Our hero is asking, "how can I make this alcohol-based fluid act like diesel? I know that it would be silly to try to move all that manure with 20,000 model airplanes, and my client really does not have that kind of money. Someone tell me it's going to work." It's funny to read astroturfers like this, recommending a fleet of 20,000 model airplanes. [slashdot.org] It'll be fast!
  • by j3110 ( 193209 ) <samterrell&gmail,com> on Friday July 18, 2003 @07:43PM (#6475399) Homepage
    Actually, we supply a lot of small businesses in our area with whatever tech support they need. Kind of an outsourced IT staff. Paying us to fix things is as cheap as paying an MCSE monkey to spend 8 hours on a 5-minute job. We support OSS, so they save on licensing too. We even have a software team that makes custom software, then releases it open source.

    The point is, they should be looking for the right service. You don't need dedicated staff with open source software. We get a call maybe once a month about an OSS product gone bad (usually something silly that can be fixed in 5 minutes if you know what you are doing), and we ssh in and fix it. We get calls about MS products, and about idiots who don't turn things on before they want to use them, from 8AM till close every day. I'm pretty sure that most of our clients have spent more money on MS-related tech support than OSS-related tech support. I can calculate right now that the TCO for even a pirated MS product would still be greater than an OSS product by a significant factor. The speed at which MS products have to be fixed/patched is very much greater than a properly configured Linux system, and you're paying for that hell to boot.

    If you want to shoot yourself in the foot by jumping on the .Net train before you can see where the tracks are going, then you go ahead. As for me, I plan to use as much cross-platform programming (mostly Java because the GUI is the same everywhere) and free/open source software that I possibly can, mostly because the products I use like JBoss (Free J2EE), Samba, MySQL/PostgreSQL/SAP/Firebird, etc. are more stable than .Net, Windows, MSSQL, etc.

    Before those of you who say SQL Server is actually good start flaming me: that's where a lot of headaches come from. SQL Server drops and corrupts more records than MySQL did before it even had transaction support. (There, now I'll get flames from both ends.) Also consider the price you are paying. (Per connection, last time I checked.) Spend more money on the hardware and get RAID-1 on good disks and a good UPS, and you will have a faster, more reliable RDBMS.
  • by 4of12 ( 97621 ) on Friday July 18, 2003 @07:45PM (#6475407) Homepage Journal

    telling those companies they don't just have to buy

    TANSTAAFL.

    No matter what, you'll have to lay out cash for the three essential ingredients:

    1. hardware
    2. software
    3. people to support and maintain the hardware and software

    Microsoft marketing would have you believe that their software solves all your problems and that lots of cheaply available people can do the job. They'll still charge you for their software and you'll find out that hardware still costs something and that getting good people to support and maintain your software and hardware is more expensive, but worth it.

    Linux advocates will tell you that the software costs zero and that any competent sysadmin can do the job. You'll find out you still have to buy reasonable hardware. And you'll find out that getting good people to maintain and support your hw and sw costs more, but is worth it.

    Any way you go you're gonna pay.

  • by sootman ( 158191 ) on Friday July 18, 2003 @07:51PM (#6475444) Homepage Journal
    Sorry, I've got to go with the poster who says "If you don't have time to take care of your box, get the fuck off the Internet." I run a Linux/Apache site and my logs are full of requests for "default.ida?XXXXXXX..." and other viruses that came out (and were fixed) *years* ago. With UNIX, you pay a bit more in the beginning and then you hardly need to touch the box. Anything that needs to be done, a competent admin can do with nothing more than SSH. As opposed to MS boxes that just sit around, get owned, and fuck up everything. Sorry, but you cannot have security and ease of use and low cost all at once. Security is *not* fire-and-forget. Security is ongoing *work*. Work: not fun and not easy. You can't have your cake and eat it too. Learn your way around, or pay an admin. Otherwise, someday you'll get owned and you'll become one more idiot contributing requests for 'default.ida' and 'root.exe' to my Apache logs.

    And I'm sick of this attitude that always seems to come from SB owners, like they are *owed* something and *exempt* from working just because they're a small business. What would we do if they said "I don't have the time or money to learn the rules of the road or how to care for an automobile, I just want to blast down the road at 130 mph, trailing a cloud of oily smoke, because I'm a SMALL BUSINESS OWNER and I'm in a hurry, dammit!"? Would we allow that kind of behavior? HELL NO. I'm sorry, it costs time and money. ACCEPT IT.
  • by dabootsie ( 590376 ) on Friday July 18, 2003 @07:51PM (#6475446)
    It's a damn simple question: can .NET really scale?

    That really isn't the question being asked at all.

    This person doesn't want to know if .NET will provide a relatively non-diminishing gain in performance as more capacity is added, which would be scaling.
    They actually want to know if it will handle a large number of concurrent connections to services on small hardware.

    The real question is:
    Will it handle a lot of clients at once on very little hardware?

    The answer is: No.

    If you don't have enough capital to invest in the infrastructure you need, you have to either find something that will do what you want with less, or give up on the whole idea.
  • by dabootsie ( 590376 ) on Friday July 18, 2003 @08:00PM (#6475499)
    Right, and those specifications sometimes push a project outside the realm of possibility, as seems to have happened here.
    Either work with the client to get the specs changed to something feasible (which consumes time you can't bill for), or pass on the job and look for another client. Them's the breaks.
  • by KrispyKringle ( 672903 ) on Friday July 18, 2003 @08:17PM (#6475587)
    I think you're missing the point. First, nobody said he wasn't smart enough. He was just comparing options. My point was that the comparison is worth making; there is no valid way to say, "OSS is always better and cheaper."

    Furthermore, and I don't know much about .NET, he was also looking for an SQL backend. You mention "Linux, apache, PHP, whatever" and "some servlet engine, jsp, etc" without seeming to really understand a couple of crucial points: the "Java one" would still need an OS and webserver, and all three still need a database server. Really fancy, high-volume DB servers such as Oracle cost a lot. So then we end up comparing, say, MySQL, MSSQL, mSQL, and PostgreSQL? Or Perl, PHP, ASP, and JSP/servlets? I'm sure I'll get flamed by zealots, but those aren't always easy comparisons.

    Write it off as ignorance if you like. It doesn't sound like you're a professional in this field. But so what if he is ignorant? That was my point; if he is best with MS, it's not going to be profitable for him or his client for him to be mucking about with Unix instead.

    As for the amount of money you'd save, well, I already commented on that. Sometimes the figures aren't necessarily what they may appear to be; the initial outlay is certainly greater with commercialware, but support, time spent on maintenance and deployment, and so forth, is sometimes a lot less.

  • by SlashChick ( 544252 ) <erica@noSpam.erica.biz> on Friday July 18, 2003 @08:25PM (#6475632) Homepage Journal
    Install an SSH server on Windows and you'll have much of the same functionality as UNIX through the command line.

    " With UNIX I'm in Ireland (I'm usually based in the US) and I get a call 'We just got a new user, could you add them'. I whip out my Ericcson 68i and Sharp Zaurus - and ssh into the server and run a script to add the user."

    Did you even bother to check out whether this was possible in Windows? I guess not: this site [windows2000faq.com] shows you how to add a user from the command line in Windows. In fact, you could even write a script to do that (batch files... remember those?). And here are lots of other handy things you can do from the command line in Windows [labmice.net], including changing user passwords, forcing users to log off, and more.
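    As a concrete sketch (the account and group names here are made up, and exact behavior varies a bit between Windows versions), the built-in `net` command covers basic account management from a plain cmd.exe prompt or an SSH session:

```
rem Create a user with a password (hypothetical account name)
net user jsmith S0mePassw0rd /add

rem Put the new user in a local group
net localgroup "Power Users" jsmith /add
```

    Drop those lines in a .bat file and the "whip out my phone and run a script" scenario works on Windows too.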

    Once again, ignorance of what Windows can do is no excuse. I administer 16 Linux boxes... I'm not anti-Linux by any stretch of the imagination, and I know that there are lots of situations where Linux is the better choice. But that still doesn't mean I'm ignorant about what Windows can and can't do.
  • While I don't disagree with you, comments like this make me sad. It's too bad that Internet publishing has become an experts-only club. Much of the early optimism about the Internet (especially the web) centered around empowering ordinary people to get their message out without having to own a printing press.
  • Re:What?! (Score:2, Insightful)

    by zulux ( 112259 ) on Friday July 18, 2003 @08:35PM (#6475683) Homepage Journal
    Why can't you just use Activestate Perl to hit a few Win32 API calls to do the job? Connect to the machine, whack the user database around with some custom programming, and then you're done.


    Great idea, if you have to use NT.

    But if I did that for my smaller clients - I'd have to charge them an arm and a leg for each Windows server I deployed.

    They would not like an invoice that read like this:

    Windows Solution
    Windows 2003 Server 10 CAL - $1000
    Install Windows 2003 - $300
    Make Windows Behave Like Unix - $3000

    Instead, they like this:

    FreeBSD Solution
    Install FreeBSD - $300
    Donation to FreeBSD.org - $300

    So for my smaller customers, it's not an option that makes economic sense.

    There's nothing wrong with Windows, but remote management is VERY difficult.

    This is the important bit

    In addition, UNIX has a rich history of remote management - there are whole books that can help me. But for Windows - where's the "Remote Windows Management Using ActiveState Perl and a Few Win32 Calls for Dummies"?
  • by His name cannot be s ( 16831 ) on Friday July 18, 2003 @08:36PM (#6475684) Journal
    Holy mother of fscking god.

    STOP USING WEB SERVICES.

    #1) If you are using the [WebMethod] shit and hosting your SOAP calls via IIS you need a smack in the head.

    #2) If you are using SOAP to communicate between the layers of your application, and are not exposing the SOAP methods for external consumers of the web services, You need more smacks in the head.

    #3) If you don't know what you are doing, hire someone who does. (And by the sound of your point #6 about using reflection and dynamic code in the production app, you don't.)

    If you are in .NET and you *NEED* a remote facility between your layers, (And if you were working for me, you'd damn well prove it), then for the love of god, switch to Remoting. Don't know what that is? Grab a book, dumbass. You can use a binary formatter and jump your speed by an order of magnitude, or you can fall back to a SOAP formatter on remoting and still double your performance.
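    For reference, a minimal sketch of what the Remoting host side looks like with a binary-over-TCP channel (the service type, assembly name, URI, and port here are invented for illustration):

```xml
<configuration>
  <system.runtime.remoting>
    <application>
      <service>
        <!-- "OrderService" / "MyApp.Services" are hypothetical names -->
        <wellknown mode="Singleton"
                   type="MyApp.OrderService, MyApp.Services"
                   objectUri="OrderService.rem" />
      </service>
      <channels>
        <!-- the tcp channel uses the binary formatter by default -->
        <channel ref="tcp" port="8085" />
      </channels>
    </application>
  </system.runtime.remoting>
</configuration>
```

    The client loads a matching config and calls the proxy like a local object - no SOAP envelope gets built or parsed on either side, which is where the speedup comes from.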

    If you don't *NEED* a remote facility between the layers, stop using SOAP, or any other remote procedure calling solution. Nothing pisses me off more than bandwagon jumping know-nothings using a fancy fucking hammer to solve a problem which requires far less.

    It would appear the largest problem you have in overcoming your problems with .NET is your own stupidity. Whether you are on .NET, Java, PHP+MySQL, Perl or x86 assembler, it would appear that you do not have the experience to manage either your application development or your client's expectations.

    Bottom line: to support 100+ concurrent requests, there is no reason you shouldn't be able to do that for under 20K... (although I wonder where that number came from. Do these servers sit in a vacuum? Who's running them?)

    From a purely academic standpoint, what the heck were you guys thinking when you budgeted only 20K of hardware for an app that does 100+ concurrent transactions? That sounds like enough business to afford quite a lot more.

    If you are/were so budget constrained, why are you spending thousands on server software? (.NET server, SQL Server, etc...) If you are so budget constrained, you shoulda gone open source.
  • by The Bungi ( 221687 ) <thebungi@gmail.com> on Friday July 18, 2003 @08:37PM (#6475690) Homepage
    Really, really. I won't add to the many good comments about the topic, but let me say this: if you don't know what you're doing (and from your questions I assume you don't), invest a bit of money and hire a good architect for a couple of weeks. Not only will he/she answer your questions, but will probably get you started on a good design and a decent implementation.

    I've designed infrastructure and application-level systems that use .NET and happily meet your requirements (MSMQ is not scalable? Huh?), and then some. So yes, to answer all your questions, it works. But if you don't know what you're doing it's very simple to fuck it up, regardless of whether you're using Microsoft products or not.

    Coming here (!) and asking questions about whether or not a given Microsoft product is viable seems to me like a losing proposition. FWIW, most professionals that work with Microsoft technologies are far more willing to admit shortcomings in those products and suggest alternatives, something that the /. crowd seems incapable of. So at least if you hire someone in the know you won't get BS left and right.

    So get some help.

  • by zulux ( 112259 ) on Friday July 18, 2003 @08:45PM (#6475728) Homepage Journal
    The point is - you have to fight windows every step of the way to do remote management.

    1) Install an SSH server.
    2) Install more crap.
    3) Hunt down obscure Internet references, because only weirdos do command-line stuff with Win32.

    Here's a test:

    In Windows:
    Make a 'batch' file that dumps a running MS SQL server into a file, zips it, names it after the system time, and emails the file. Make it happen every hour.

    In UNIX it's ONE, SIMPLE, EASY-TO-UNDERSTAND line put in a crontab file.

    No downloading, no searching, no crap. Just done.
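    Something like this, say (a sketch assuming a MySQL backend and a working local MTA; the database name, paths, and address are invented - substitute your own database's dump tool):

```
# crontab entry: hourly dump, gzipped, stamped with the time, then mailed
# (% must be escaped as \% inside a crontab line)
0 * * * * f=/backup/mydb-$(date +\%Y\%m\%d-\%H\%M).sql.gz; mysqldump mydb | gzip > $f && uuencode $f dump.sql.gz | mail -s "hourly dump" admin@example.com
```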

  • by man1ed ( 659888 ) on Friday July 18, 2003 @08:50PM (#6475756) Homepage Journal
    This guy is trolling. From his post:
    I've found Red Hat 9 most impressive.
    ...
    The included version of Wine ...


    From the Red Hat 9 Release Notes:
    The following packages have been removed from Red Hat Linux 9:
    ...
    - wine - Developer resource constraints
  • by macshit ( 157376 ) <snogglethorpe@NOsPAM.gmail.com> on Friday July 18, 2003 @09:01PM (#6475805) Homepage
    Once again, ignorance of what Windows can do is no excuse.

    Er, I thought that was the whole point of Windows -- that you could use it easily despite being kinda ignorant. If you need to rely on command-line interfaces and configuration files anyway, then why not do it properly and use Linux/UNIX in the first place?
  • by Chromodromic ( 668389 ) on Friday July 18, 2003 @09:08PM (#6475854)
    Dude, do you read Slashdot?

    Because, off the cuff, I can think of at least five other sites, with dozens of other readily contacted individuals, that are going to give you more accurate, more informed, and more sympathetic answers than the site on the Web that publishes a depiction of Bill Gates wearing Borg gear.

    Moreover, in case you haven't noticed, the vocal readership here isn't exactly a group of Windows devotees. Whenever the new Linux kernel comes out the admins just issue an announcement that ends with "You know what to do ..."

    So unless this is a scheme to generate loads of comments designed to convince your client to implement FreeBSD instead ... wait. AHA!
  • by Anonymous Coward on Friday July 18, 2003 @09:22PM (#6475931)
    People need to realize that IT is not a "cost center" or a "profit center" it's a "business enabler".

    Keeping machines running is a form of insurance. Haven't any of these companies had to spend tens of thousands of dollars after a security breach? They'll start paying a few hours per month for security pretty quick after that.

    Nobody complains about the ROI of insurance do they?

    True, certain VENDORS like to take advantage of folks and charge $100,000 for a $1000 job, but let's not throw out the whole IT department because of this.
  • Re:some advice (Score:1, Insightful)

    by Anonymous Coward on Friday July 18, 2003 @09:42PM (#6476027)
    Google doesn't do any real transactions. If your search request gets lost, it's not a big deal. They use an in-RAM database, for example.

    Conclusion: Google doesn't compare to most line-of-business applications at all and is therefore irrelevant.
  • by 73939133 ( 676561 ) on Friday July 18, 2003 @09:56PM (#6476072)
    A) This consultant, it sounds like, is largely or exclusively MS. He's not going to suggest Open Source software to his client because that will mean a loss in business.

    That's an idiotic argument. For consultants, OSS is often at least as much of a money maker as Microsoft software. Furthermore, there are mature non-OSS alternatives (e.g., Java) available.

    B) Oftentimes a commercial solution to some problems exists where a free one does not. The cost of development and maintenance means that the balance is not strictly in terms of free and non-free; after all, your developers' time costs quite a bit as well, and home-grown or open source solutions may need more time taken in administration.

    Yeah, and "oftentimes" the commercial solution actually performs less well, is less reliable, requires more hardware, and requires more administration. A lot of Microsoft products fall into that category. Products like MS SQL Server and MS Exchange are prime examples of what a money pit commercial software can be.

    Face it, people use OSS not because they save on licensing costs, but because it works better and is easier to maintain.
  • by Anonymous Coward on Friday July 18, 2003 @10:18PM (#6476150)
    The thread should end with parent's post.

    I'm serious. The answer is NO. Nothing else should be said...but here's my 2 cents:

    You want to maximize hardware for the money???

    WTF...linux/*bsd was born to do this.

    If you are posting on slashdot, I imagine you must read it "from time to time".

    Take the advice people have been giving out day after day, year after year.

    small outfits HAVE SO MUCH TO GAIN FROM OSS.

    sheesh.

    SLASHDOT HAS FAILED if people are posting questions like the one in the article.

  • by 3770 ( 560838 ) on Friday July 18, 2003 @10:49PM (#6476275) Homepage
    But you aren't exactly right either.

    You are simplifying when you say to not 'embed applications' in the DB. I will interpret 'embedding applications' in the DB as doing business logic in the database.

    Many times it is more resource-efficient to perform some of the business logic inside the _database server_ itself.

    It can be more efficient for the database to do some operations which result in a relatively small result set, rather than pushing a lot of data up to the application server.

    The bottleneck will usually not be the CPU on the database server, it will be the disks. And the disks are better utilized when you do the manipulation inside the DB server itself.

    This breaks the separation between the business-logic tier and the data-access layer. Design that is easy to maintain and design that is efficient to execute don't always go hand in hand.

    I'm a pragmatist. I say, make an n-tier application. Make an object oriented design. But don't be rigid, break the rules if it suits your purposes. Hey, I even use a goto every once in a while when it makes my code faster or simpler.
  • by sean23007 ( 143364 ) on Friday July 18, 2003 @10:56PM (#6476305) Homepage Journal
    I think this article was asking for numbers and setup information, and probably a lot of other people would be interested in yours if your claim is true. Please elaborate.

    I'm not trolling; I'm curious.
  • by Earlybird ( 56426 ) <slashdot&purefiction,net> on Friday July 18, 2003 @11:33PM (#6476405) Homepage
    He only said that those 1-megabyte messages negatively affected the average, not that they could be passed with anything approaching "near-real-time" speed.
  • by RoLi ( 141856 ) on Saturday July 19, 2003 @03:32AM (#6477013)
    The other poster may be ignorant of what Windows can do, but you are ignorant of reality:

    • You have to install lots and lots of extra stuff on Windows to make it work over ssh. Installing that costs time and money.
    • Just like the other poster, nobody runs Windows over ssh, because of the above point. If you have any questions, you are unlikely to find the answer on newsgroups etc., because so few people do it. And of course you get no support from Microsoft.
    • Often you don't know in advance that you will need remote access. Suppose you are on holiday and a problem arises on the server: on a default Windows install you are screwed; on a default Linux install it's no problem.

    So yes, it is possible to administer Windows over ssh, it's just a pain in the ass compared to Linux, sorry.

  • by YuppieScum ( 1096 ) on Saturday July 19, 2003 @03:54AM (#6477060) Journal
    This is not rocket science, and I had presumed this rule had been learned a long time ago... but here it is again:

    "To ensure scalability, host each server-component of an application on it's own hardware - optimsed for the specific task assigned."

    In other words, DO NOT deploy everything onto one machine. Remember the old adage "Jack of all trades, master of none".

    So, put the database server on its own box, with dual cpu, loads of memory and RAID-mirrored drives.

    Put IIS, the ASP.Net app (and the web services if you're feeling cheap) onto a fast, single cpu box, enough memory to turn off paging and a single drive - GHOST'd onto CD for backup.

    Install an extra network card in both, and set it up solely as the route for traffic between them.

    Implementing this hardware for less than 20K should be trivial.

    If you can't comfortably support 200 concurrent users with this, you need professional help - my consulting rates are quite reasonable...
  • by spybreak ( 636509 ) on Saturday July 19, 2003 @04:44AM (#6477145)
    MSSQL already has a stored procedure language - T-SQL - so why not use that?

    In my experience, the object-relational style mappings provided by, for example, Java stored procedures in Oracle are a real performance killer. Why would C# stored procedures be any different?
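    For anyone who hasn't seen it, a minimal T-SQL stored procedure (the table and column names here are invented) is as plain as:

```sql
-- Hypothetical Orders(CustomerId, Total) table
CREATE PROCEDURE dbo.GetCustomerTotal
    @CustomerId int
AS
BEGIN
    SELECT SUM(Total) AS GrandTotal
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId;
END
```

    It runs right next to the data with no object-relational mapping layer in between, which is exactly where the Java-in-Oracle approach pays its penalty.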
  • by macrom ( 537566 ) <macrom75@hotmail.com> on Saturday July 19, 2003 @09:42AM (#6477739) Homepage
    Quit being an idealist and live in the real world here. Linux may be born to do whatever the hell you want it to, but that doesn't change the fact that a customer needs Windows solutions. If a client comes to you asking for help with their Windows systems and you stand there and say "use OSS instead", then you're down one client probably AT LEAST 95% of the time. Maybe a small minority will want to listen to what you have to say, but more than likely they just want to roll with what they have.

    This company doesn't have money for a new beefy server. So what makes anyone here think that this company has the money to :

    1) Take down all of their current systems and install Linux or something similar.
    2) Spend the next several months learning an operating system and related tools that the IT staff may not have experience with.
    3) Spend the time and money to get rid of all of the Microsoft technologies that they use such as Exchange/Outlook, Active Directory, IIS, etc. The TCO is more than just the price of the free software. You have to make sure that you can swap out technologies without impacting your customers or your employees.
    4) Spend the money to train the current staff and/or hire new expertise to administer the new systems.

    The guy at the top that told the parent to basically STFU is right. .NET is a real world technology that TONS of companies are moving towards. Whether you Slashbots like it or not, this is the way that many of our customers are heading. Answer this guy's question to help him out as a fellow Slashdotter or keep your religious preachings to yourself.

    To close, I want someone to respond to this post that has successfully walked into a company that was strapped for cash and wanted some Windows solutions, but then suggested using OSS instead and had the company buy into it. And I'm not talking about your brother's donut shop either, I mean a REAL customer with, say, a minimum of 100 users on a Windows network using AD, Exchange, etc. I think it's only fair to hear the success stories to give some validation to this argument.
  • by leonbrooks ( 8043 ) <SentByMSBlast-No ... .brooks.fdns.net> on Saturday July 19, 2003 @10:49AM (#6478106) Homepage
    Wow, this must be /. got a clue day.

    Your drugs must be more expensive than mine. (-:

    /. as a whole is as clueless as ever, but you did see a few good posts.

    Back on topic(-ish): as well as the low-bandwidth point the grandparent made, I think it's more germane to mention that any one of sixty-to-a-hundred failures will keep a Windows server (and hence VNC) off the air, but you only get a-handful-to-a-few-dozen chances to kill a Linux server stone dead as far as remote access is concerned.

  • by Stu Charlton ( 1311 ) on Saturday July 19, 2003 @11:16AM (#6478265) Homepage
    No one knows the real reasons behind the Orbitz debacle, other than it being attributed in the press to Oracle Real Application Clusters. RAC runs quite a few large systems without such press debacles, including a crucial one for the FAA... We don't know if it was a bug, or incompetent sysadmins, or both.

    As for MySQL being "reliable", you need only look at the history of Slashdot to see that MySQL's "reliability" is, at best, a fairly recent innovation, and still open for debate.
  • by Paracelcus ( 151056 ) on Saturday July 19, 2003 @01:39PM (#6479123) Journal
    100 users?

    You're kidding, right? I've ridden on a bus with more people than that!

    I was a sysadmin at a place that had 300 active user accounts on a Red Hat 6.2 machine, a Pentium 150 with 160 megs of RAM and 2-4 gig HDs.
    It provided the following services.
    SMTP
    POP3
    SSH
    Caching Name Service
    1 MySQL Database
    SMB filesharing.
    HTTP (frontend for the database).

    The machine was made from parts bought at Fry's for a couple of hundred dollars.

    Where are small companies going to get 50K for a machine when they have to lay off people to make payroll? 50K is the salary of one productive individual for a year!
  • by j3110 ( 193209 ) <samterrell&gmail,com> on Sunday July 20, 2003 @04:29AM (#6483073) Homepage
    The one that is capable of reverting to Paradox does indeed use the BDE (about the only thing that uses Paradox). I happen to know quite a bit about the BDE, having worked on Builder/Delphi projects in the past. If the queries are done properly, it will work in either server or client mode, but you can always try the other. I loaded up the BDE administrator, and I managed to find some set of configuration that made it a little faster, but the queries ran both server-side and client-side just fine. The sad part is, I had better results executing on the client side than on the server side. A SELECT * on a table should never take very long, and most of the problem, as far as I can tell from diagnostics, was the server just halting network traffic. It may be a bug or problem with NT4 and SQL Server, but I'm sure not going to tell them they need to spend 20K to get these applications running. I'd much rather develop a work-alike and drive the idiots that chose SQL Server in the first place out of the market if I can. Their product doesn't work for a group of more than 3 people, despite their claims. We've sunk enough debugging time into that whole thing that we could already have developed an alpha.

    It's not the rogue master browser problem, BTW. Get out any NT4 system and any operating system built on NT since, and by default, it will be hideously slow. Pull out a Windows 98 machine and put it on the same network, and it will be instant. I'm fairly certain it's the intentional incompatibility that they do all the time. Running mixed iterations of MS software will cause a lot of problems. They want you to upgrade everything all the time, because that's how they squeeze more money out of you.

    I'm targeting any RDBMS with a JDBC driver, not just DB2. DB2 is just a preferred upgrade path. Even with this kind of portability, I don't take too much of a performance hit. Stored procedures wouldn't help, and neither would sub-selects. The only thing to gain from upgrading beyond MySQL for this application is good, reliable data partitioning/clustering. Don't believe anyone who tells you that the current hot-backup stuff in MySQL is useful for anything but failover. There were no distributed transactions the last time I checked, so I can't imagine that it would be reliable if you were using transactions on two different servers simultaneously.

    If you are using Java, and you want good abstracted database access, you should take a look at the open source Hibernate project. Excellent software, much better than CMP. It supports all kinds of automatic relationship management too. It has its own abstracted query language that it compiles into native queries for whatever features your DBMS can handle. It's the best way to get performance out of RDBMSs without spending months developing special queries for each DBMS, and still have DBMS portability.
