Displays

Ask Slashdot: What's The Best Monitor For Development Work? 215

Long-time Slashdot reader AmiMoJo is having trouble finding a monitor that's good for writing code: The 16:10 aspect ratio, which allows for some extra vertical space that is extremely handy when viewing source code, is basically dead as far as I can tell. Dell still sells a few older models, but there are no 4K+ monitors with this aspect ratio.

Speaking of 4K, at 27" it's about the same PPI (pixels per inch) as my 2012 laptop with a 15" 1080p display, only bigger (around 160 PPI). It feels a bit awkward: not quite high enough to hide the pixels or render "perfect"-looking fonts. 5K would be better (200 PPI), but every 5K monitor seems to have been discontinued except for one Iiyama model that seems to have quality problems.

Everyone seems to be obsessing over gaming and photography monitors now. Is there anything left for developers who don't care about 240Hz and calibrated colour, but do care about aspect ratio and text rendering quality?

Leave your suggestions in the comments. What's the best monitor for development work?
Hardware

Ask Slashdot: How Do You Estimate the Cost of an Algorithm Turned Into an ASIC? 97

"Another coder and I are exploring the possibility of having a video-processing algorithm written in C turned into an ASIC ("Application Specific Integrated Circuit") hardware chip that could go inside various consumer electronics devices," writes Slashdot reader dryriver. The problem? There seems to be very little good information on how much a 20 KB, 50 KB, or indeed 150 KB algorithm written in the C language would cost to turn into an ASIC or "custom chip".

We've been told that "the chip-design engineering fees alone would likely start at around $500,000." We've been told "the cost per ASIC will fluctuate wildly depending on whether you are having 50,000 ASICs manufactured or 5 million ASICs manufactured." Is there some rough way to calculate from the source code size of an algorithm -- let's say 100 kilobytes of C code, or 1,000 lines of code -- a rough per-unit estimate of how much the ASIC hardware equivalent might cost to make?
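Source-code size is a poor predictor of silicon cost (gate count, process node, and die area matter far more), but the two quotes above do support a back-of-envelope model: amortize one-time NRE (engineering, verification, mask set) over the production volume, then add the marginal cost of each packaged, tested die. A rough sketch, with all per-die figures being illustrative guesses rather than quotes:

```python
# Back-of-envelope ASIC per-unit cost: spread one-time NRE over the
# production volume, then add the marginal cost of each good chip.
# The $500,000 NRE figure is from the submission; die, packaging and
# yield numbers below are hypothetical placeholders.

def asic_unit_cost(nre_dollars, volume, die_cost, package_test_cost, yield_rate):
    """Estimated cost per good (sellable) chip."""
    per_unit_nre = nre_dollars / volume
    per_unit_silicon = (die_cost + package_test_cost) / yield_rate
    return per_unit_nre + per_unit_silicon

# The two volumes mentioned in the submission:
for volume in (50_000, 5_000_000):
    cost = asic_unit_cost(nre_dollars=500_000, volume=volume,
                          die_cost=0.80, package_test_cost=0.40,
                          yield_rate=0.9)
    print(f"{volume:>9,} units -> ${cost:.2f} per chip")
```

This illustrates why the per-unit price "fluctuates wildly" with volume: at 50,000 units the NRE dominates; at 5 million units the marginal silicon cost dominates.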

Why do we need this? Because we want to pitch our video processing tech to a company that makes consumer products, and they will likely ask us, "So... how many dollars of extra cost will this new video processing chip of yours add to our existing products?"

There were some interesting suggestions on the original submission, including the possibility of C to HDL converters or a system on a chip (SoC). But are there any other good alternatives? Leave your own thoughts here in the comments.

How do you estimate the cost of an algorithm turned into an ASIC?
XBox (Games)

Ask Slashdot: Should Microsoft Make an Xbox Phone? (onmsft.com) 69

dvda247 writes: Since there's the Nintendo Switch and previously there was the Sony PSP (Playstation Portable), should Microsoft make an Xbox Phone? There are already 'gaming phones' like the ASUS ROG Phone 2, but should Microsoft jump back into the smartphone game to make a phone running Android that is focused primarily on playing Xbox One games? Xbox Game Pass and Xbox Play Anywhere would be huge selling points to make an Xbox Phone. What are your thoughts?
Windows

Slashdot Asks: Do You (Ever) Shut Down Your Computer? (onmsft.com) 304

New submitter dvda247 writes: Do people ever turn off their Windows 10 PCs anymore? Newer hardware and operating system changes make PCs work differently. Do you shut down your Windows 10 PC, or do you put it in sleep or hibernate mode? We are broadening the discussion to include desktop computers and laptops running Linux-based operating systems, macOS, or ChromeOS. Additionally, how often do you restart your computer?
Emulation (Games)

Ask Slashdot: How Will Abandonware Work With Today's DRM Locked Games? (youtube.com) 153

dryriver writes: Thousands of charmingly old-fashioned computer and console games from the 8-bit, 16-bit, and MS-DOS eras are easily replayable today in a web browser -- many Abandonware websites now feature play-in-browser emulated games. Here is a video of 101 charming old MS-DOS games, most of which can be replayed on Abandonware websites across the internet in seconds.

But what about today's cloud-linked, DRM-crippled games, which won't even work without Steam/Origin/Uplay, and many of which don't even let you host your own multiplayer servers anymore? How will we play them 20 years from now -- on what may be Android, Linux, or other OSs -- when they are tethered to the cloud? And is writing a fully working emulator for today's complex Windows/DirectX games even feasible?

How will Abandonware work 20 years from now?

Businesses

Ask Slashdot: Is a Return To Idealism Possible In Computing? (slashdot.org) 354

dryriver writes: Almost everything in computing appears to be tainted by profit-driven decision-making today, from the privacy catastrophe of using personal electronic devices to forced SaaS software licensing. If Big Oil or Big Tobacco ran the computing sector, things probably wouldn't be much worse than they are today. So here's the question: Is a return to idealism possible in computing? Can we go back to the days when computing was about smart nerds building cool shit for other nerds? Or is computing so far gone now that things simply cannot get better anymore?
Software

Ask Slashdot: Do You Prefer One-Time Purchases or SaaS Subscriptions? (wikipedia.org) 356

Long-time Slashdot reader shanen remembers the days of one-time software purchases, before companies began nudging customers to a subscription-based "software as a service" model: New bugs and security vulnerabilities keep being discovered, which means the product cannot EVER be regarded as completed. Whatever the original cost, no matter what the software was supposed to do, it needs unending support. Right now I'm unable to see any other solution than SaaS!

Not limited to Microsoft, of course. Perhaps Apple was the original source of the approach...
Slashdot reader dryriver sees a dire trend: Current computing younglings may never know a future where you can actually run software locally on a PC you own, and/or not pay for it as SaaS. All perpetual software licenses may go away in the next six years. Autodesk and Adobe have already moved to SaaS-only.
But is there a case to be made for ongoing payments to fund ongoing support? Or is SaaS just an exploitative business model that's bad for customers but good for software vendors? Share your own thoughts in the comments.

And do you prefer one-time purchases or SaaS subscriptions?
The Internet

Ask Slashdot: Why Do Popular Websites Have Bad UI Navigation? 235

A while back some "bored developers and designers" started uploading their ideas for the worst volume control interface in the world. But now Slashdot reader dryriver asks a more serious question: You follow a news story on CNN or BBC or FoxNews or Reuters. The front page of the news site changes so frequently that you wish there were a "News Timeline" UI element at the top of the page, letting you scrub backward and forward in time (by hours, days, weeks, years) so you can see previous states of the front page and get a better sense of how the story developed over time. How many major news websites have this scrubbable Timeline UI element? Currently none do.

Or you go on YouTube. Hundreds of millions of videos for you to browse. Except that there are only three basic UI elements you can use -- keyword search, the automated recommendations panel on the right, or a sortable list of a specific channel's uploaded videos.

- There is no visual network or node-diagram UI that would let you browse videos by association.

- There is no browsing by category (e.g. sports > soccer > amateurs > kids ) or by alphabetic order.

- There is no master index or master list of videos -- like a phonebook -- that you can call up to find videos you haven't come across yet.

And yet these UI elements would not be very difficult to put in users' hands at all. Why do websites with tens of millions of daily visitors and massive web development resources do so little to allow more sophisticated browsing for those users who desire it?
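The category-browsing element in particular is technically trivial -- the submission's point is that sites choose not to expose it, not that it is hard to build. A minimal sketch of a "sports > soccer > amateurs > kids"-style tree (the categories and video titles here are made up for illustration):

```python
# A hierarchical category index is just a nested mapping; "browsing by
# category" amounts to walking a path down it. All data is hypothetical.

catalog = {
    "sports": {
        "soccer": {
            "amateurs": {
                "kids": ["Saturday league highlights"],
            },
        },
    },
}

def browse(tree, path):
    """Walk a list of category names down the tree and return that node."""
    node = tree
    for part in path:
        node = node[part]
    return node

print(browse(catalog, ["sports", "soccer", "amateurs", "kids"]))
```

The hard parts in practice are curating the taxonomy and assigning videos to it at scale, not the UI or data structure itself.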

"Is there a cogent reason to restrict website navigation to 'simple, limited and dumb'," asks the original submission, "or do these websites simply not care enough or bother enough to put more sophisticated UIs into place?" Share your own thoughts in the comments.

Why do popular web sites have bad UI navigation?
Cloud

Ask Slashdot: Budget-Friendly Webcam Without a Cloud Service? 118

simpz writes: Does anyone know of a fairly inexpensive webcam that doesn't depend on a cloud service? A few years ago, you could buy a cheap webcam (with the usual pan/tilt and IR) for about $50 that was fully manageable from a web browser. Nowadays the web interfaces are limited in functionality (or non-existent), or you need a phone app that doesn't work well (maybe only working through a cloud service). I've even seen a few cheap ones that still need ActiveX to view the video in a web browser -- really, people!

I'd like to avoid a cloud service, both for privacy and so the camera can operate on the LAN with no internet connection present. Even a webcam that lets you disable the outbound cloud connection and use it fully locally would be fine. I guess the issue is that this has become a niche: the ease of a cloud connection wins for most people, and other considerations don't really matter to them.

I had a brief look at a Raspberry Pi solution, but didn't see anything with a small webcam form factor (with pan/tilt etc). Alternatively, are there any third-party firmwares for commercial webcams (sort of an OpenWrt-, DD-WRT-, or LineageOS-style project for webcams) that could provide direct local access only, via a web browser (and things like RTSP)?
Businesses

Ask Slashdot: Is the Pace of Tech Innovation Spontaneous Or Planned? 175

dryriver writes: People who are only mildly tech- or engineering-literate tend to think that innovation is really difficult and expensive, takes years to achieve/discover, and that when something "amazing" is actually discovered, the innovative tech is integrated into products like smartphones or PCs "as soon as is technically possible".

More tech- and engineering-literate people I talk to frequently tell me the exact opposite of this, namely that companies often discover an innovative method or technique in their R&D labs, are capable of packaging said method or technique into a product as soon as the next fiscal year, but choose instead to sit on the innovation for several years, bringing it to market only when competitors force them to, or when sales of existing-paradigm products start to become lacklustre... at the exact point in time where the sales and profit numbers from selling the innovation are maximized.

One tech market, two very different opinions on how it actually works. So here's the question. Do we typically get innovation in products "shortly after the necessary techniques are discovered and mastered", or do we rather get cool innovation handed to us "in a planned fashion" -- at a deliberately delayed future date when the manufacturer thinks it will achieve the greatest financial return from selling the innovation to the end user?
Businesses

Ask Slashdot: Why Does Suicide Seem To Be More Common Among Tech Workers? 274

tripleevenfall writes: At numerous points during my career in the tech industry, my workplaces have been affected by the suicide of an employee. Usually beginning with the receipt of a vague email that management has been "saddened" that someone had "passed away" recently, the truth soon becomes known and the questions begin circulating again. Why does suicide seem to be more common among tech workers? Is it due to lifestyle choices commonly associated with tech workers that lead to isolation? Are the personality types that choose tech work more prone to mental illnesses?
Android

Slashdot Asks: How Long Before Google Shuts Down Its Little -- But Expensive -- Pixel Smartphones Project? (radiofreemobile.com) 109

After years of its on-and-off interest in smartphones, Google today produces some of the best phones on the planet. The Pixel 3 and the 3 XL take better pictures than most smartphones -- certainly any phone that predates them. But the whole idea of Google making handsets -- while also being the company that maintains Android and has relationships with hundreds of OEM partners that themselves make and sell Android handsets -- has also been peculiar. Additionally, Google itself has an alarmingly long track record of losing interest in things, including hardware projects -- and especially when they finally appear to have courted a large following. Richard Windsor, director of research firm Radio Free Mobile, adds: While the wires are already speculating on the form factor of the Google Pixel 4 due to be launched in Q4, I am wondering whether this will be the last smartphone that Google makes. Ever since it wasted $12.5bn of shareholders' money on Motorola Mobility in 2012, Google has had a bad case of what I refer to as engineering disease (see here and here and here). I diagnose engineering disease as a condition where engineers often get so excited about whether they can develop something that they forget to ask whether they should develop that something. Engineering disease almost always ends in financial disaster, and I calculate that Google's hardware business has done nothing but burn cash since the day it was created. Worst of all, I can find no logical rhyme or reason why Google needs to make hardware other than a foolhardy attempt to take on Apple.

This it will never be able to do unless it takes Android fully proprietary so that it can control the experience from end to end, and it has been unable and unwilling to do this to date. Furthermore, Samsung has done a much better job of taking on Apple given its scale, brand, distribution and the fact that its core competence is to take the innovations of others and make them smaller, better and cheaper. [...] This is why I have argued that Samsung and Google should stop wasting money on each other's core competence and throw in their lot together. The problem for Google hardware is that the days of under-performing businesses hiding under the skirts of the giant search cash machine are coming to an end. We have already seen this: in March, the Pixel Slate and Pixelbook team was cut back due to the lackluster sales of the product. The three versions of the Google Pixel have sold in paltry volumes, with market share never reliably exceeding 0.3% and 4.5m units sold in 2018. Given the low volume, I would estimate the gross margin of this product is around 20% in the best instance, which after product development costs and marketing leaves very little, if anything, left over.

This is not the kind of performance that Google is used to, which, combined with an apparent inability to really get the hardware right (see here), means that Dr. Ruth Porat (CFO of Alphabet) will be asking some very hard questions of this division this year. Consequently, I think that Google needs to see a significant step up in performance with the Pixel 4; otherwise, it too may fall under the surgeon's knife. [...] The time to pull out all the stops is now, as failure is likely to result in there being no Pixel 5.
How long do you think Google will keep funding the Pixel phone project?
ISS

Ask Slashdot: Should the ISS Go Commercial? (npr.org) 193

Slashdot reader stevent1965 writes: The costs of running the International Space Station are a burden on NASA's budget. It has cost over $100 billion to construct, and operating expenses run between $3 billion and $4 billion per year, representing a substantial percentage [about half] of NASA's manned space exploration budget. What to do, what to do?

A potential solution is to turn over operations (if not ownership) to private enterprise (Elon, are you listening?). Commercialization of space exploration may be anathema to some, but there is ample precedent for the government ceding control of publicly funded endeavors to private enterprise. The Internet is the obvious example.

Why not give corporations control of the ISS? Are there drawbacks? Benefits? Which will prevail? Let's hear your opinions.

Sunday NPR noted that a few weeks ago NASA held a press event at Nasdaq's MarketSite to announce and promote "the commercialization of low Earth orbit," with astronaut Christina Koch beaming down a video from space to say that the crew was "so excited" to be a part of NASA "as our home and laboratory in space transitions into being accessible to expanded commercial and marketing opportunities" (as well as to "private astronauts.")

But there are big logistical and financial hurdles. (Even NASA admits to NPR that revenue-generating opportunities first "need to be cultivated by the creative and entrepreneurial private sector.") So leave your own best thoughts in the comments -- the how, why, what if, or why not.

Should the International Space Station go commercial?
Operating Systems

Ask Slashdot: Should All OSs Ship With a Programming Language Built In? 307

dryriver writes: If anybody remembers the good old Commodore 64, one thing stood out about this once popular 8-bit computer -- as soon as you turned it on, you could type in BASIC (Beginner's All-purpose Symbolic Instruction Code) and run it. You didn't have to install a programming language, an IDE and all that jazz. You could simply start punching code in, and the C64 would execute it. Now that we live in a time where coding is even more important and bankable than it was back in the 1980s, shouldn't operating systems like Windows 10 or Android also come with precisely this kind of feature? An easy-to-learn programming language like the old BASIC that greets you right after you boot up the computer, and gives you unfettered access to all of the computer's hardware and capabilities, just as was possible on the C64 decades ago? Everybody talks about "getting more people to learn coding" these days. Well, why not go the old C64 route and have modern OSs boot you straight into a usable, yet powerful, coding environment? Why shouldn't my Android phone or tablet come out of its box with a CLI BASIC prompt I can type code into right after I buy it from a store?
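Worth noting: the read-eval-print loop the C64's BASIC ROM provided is a few lines of stdlib code on any OS that ships Python (as most Linux distributions and macOS long did). A minimal sketch using the standard `code` module -- the variable names are illustrative:

```python
# The C64-style "power-on prompt" boils down to a read-eval-print loop.
# Python's stdlib `code` module provides exactly that: feeding it a line
# executes the line in the console's namespace, as if a user typed it.
import code

console = code.InteractiveConsole(locals={})
console.push("total = sum(range(10))")  # one line "typed" at the prompt
print(console.locals["total"])          # the prompt's namespace now holds the result
```

Calling `console.interact()` instead would drop the user into a live prompt -- so the gap the submitter describes is arguably one of defaults and discoverability (what greets you at boot), not of missing technology.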
Cloud

Ask Slashdot: Is Dockerization a Fad? 252

Long-time Slashdot reader Qbertino is your typical Linux/Apache/MySQL/PHP (LAMP) developer, and writes that "in recent years Docker has been the hottest thing since sliced bread." You are expected to "dockerize" your setups and be able to launch a whole string of processes to boot up various containers with databases and your primary PHP monolith with the launch of a single script. All fine and dandy thus far.
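For context, the single-script setup described above is typically a docker-compose file. A minimal sketch for a LAMP-style project -- the image tags, paths, and password here are illustrative assumptions, not details from the submission:

```yaml
# Hypothetical docker-compose.yml for a PHP monolith plus database.
services:
  app:
    image: php:8-apache          # Apache + PHP in one official image
    volumes:
      - ./src:/var/www/html      # mount the PHP code into the web root
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: mariadb:10
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder credential
    volumes:
      - dbdata:/var/lib/mysql        # persist the database between runs
volumes:
  dbdata:
```

With a file like this, `docker compose up` starts the whole stack; whether that buys much over a bare-metal LAMP install is exactly the question the submitter raises.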

However, I can't shake the notion that much of this -- especially in the context of LAMP -- seems overkill. If Apache, MariaDB/MySQL and PHP are running, getting your project or multiple projects to run is trivial. The benefits of having Docker seem negligible, especially with each project lugging its own setup along. Yes, you can have your entire compiler and Continuous Integration stack with SASS, Gulp, Babel, Webpack and whatnot in one neat bundle, but that doesn't seem to diminish the usual problems with the recent bloat in front-end tooling -- quite the contrary....

But shouldn't tooling be standardised anyway? And shouldn't Docker then just be an option for those who can't be bothered to have (L)AMP on their bare metal? I'm still skeptical of this Dockerization fad. I get that it makes sense if you need to scale microservices easily and quickly in production, but for 'traditional' development and traditional setups, it just doesn't seem to fit all that well.

What are your experiences with using Docker in a development environment? Is Dockerization a fad or something really useful? And should I put up with the effort to make Docker a standard for my development and deployment setups?

The original submission ends with "Educated Slashdot opinions requested." So leave your best answers in the comments.

Is Dockerization a fad?
