Tuesday, December 28, 2010

Linux for the rest of us?

Back in the late 1990s, people were touting Linux as the “next big thing.”

Think back to 1998, when an internal Microsoft memo about the “Linux threat” was leaked to the public. It appeared that Linux might be on the verge of seriously challenging Microsoft’s dominance in the marketplace. At the same time, people were worried about security vulnerabilities in Windows 98 and the operating system’s stability problems.

Indeed, some users were mad enough at Microsoft to give Linux a try. It was touted as a free operating system with a lot of software support (which it was), but one that was a bit difficult for non-technical types to install and learn (that was true, too).

The end result of the whole hubbub was that Microsoft released a better operating system in Windows XP and Linux did win a few converts, but nothing on the scale that some had hoped. In short, one might argue that competition from Linux — either real or imagined — prompted Microsoft to get its act together and take a hard look at making operating systems that are both more secure and more stable.

By the way, we’re not discounting Apple’s efforts in competing with Microsoft. Mac OS X, after all, shares some common ground with Linux in that both operating systems have their roots in Unix. Rather than going into all that, it’s probably sufficient to say that Apple’s OS X platform has been quite successful, partially due to its simplicity. Mere mortals can, indeed, install and use the thing.

That same claim couldn’t be made of Linux in the late 1990s or, indeed, even the early 2000s. Installing programs required some knowledge of how to navigate around in a terminal window and finding the right hardware drivers was a challenge. Linux developers, then, were faced with a challenge of their own — making the operating system more accessible was the key to finding more users.

That challenge was taken up by Canonical, the company behind Ubuntu, which issued its first Ubuntu distribution in 2004. Ubuntu’s mission is to both make Linux easier to use and update it regularly. That version of Linux — like most of them — is free.

How is Ubuntu doing? Pretty well, actually. I installed it on my netbook the other day after my hard drive developed a logic error and needed to be formatted. The thing about netbooks, of course, is that they typically don’t have DVD-ROM drives installed, so you don’t get backup discs of the operating system that ships with them. In my case, I had Windows XP and a license number but didn’t want to go through the hassle of rounding up new installation media. I needed the netbook in a hurry, so I opted for going to the Ubuntu site, downloading the latest Linux distribution, putting it on a pen drive and installing Linux on my netbook from that drive.

The installation, believe it or not, was a breeze. The Ubuntu site walks users through the entire process, explaining how to make a bootable pen drive from a working computer and then install Linux from it. Yes, the netbook was pretty much worthless without an operating system, but my desktop still had its stout copy of Windows XP up and running, so I was able to download Linux and put all the files I needed on a pen drive.
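For readers doing this from an existing Linux machine rather than Windows, the process can also be sketched at the command line. This is only a rough sketch, not the Ubuntu site's own point-and-click instructions: the ISO filename and the `/dev/sdX` device name below are placeholders you'd need to substitute for your own.

```shell
# Rough sketch of making a bootable pen drive from a working Linux box.
# The ISO filename and /dev/sdX are placeholders -- confirm the pen
# drive's real device name first, because dd overwrites whatever
# device you point it at.
lsblk                                    # identify the pen drive (e.g. /dev/sdb)
md5sum ubuntu-10.10-desktop-i386.iso     # compare against the published checksum
sudo dd if=ubuntu-10.10-desktop-i386.iso of=/dev/sdX bs=4M
sync                                     # flush writes before unplugging the drive
```

After the drive is written, you boot the netbook from USB and follow the on-screen installer.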

Now, bear in mind that the folks at Ubuntu realize that some people may be apprehensive about getting rid of Windows entirely and installing Linux. No problem. The Web site will tell you how to run Linux from a CD-ROM or pen drive so you can try out the operating system without risking losing Windows. It also has handy tips on how to set up a “dual boot” system that can start up in either Linux or Windows. Handy stuff.

Ah, but I needed an operating system and needed one in a hurry. I well remember how tough it was to install a Linux distribution back around 2000 and was worried about running into some of the same problems. However, the Ubuntu distribution installed on my netbook in about 10 minutes and set up all the necessary drivers. Well, almost all of them. I couldn’t get my Wi-Fi card to work with Ubuntu, but a quick Google search led me to the solution. In under an hour, then, I had an operating system up and running. In other words, you don’t have to be a software engineer or someone with a ton of time on his hands to install this and get it to work well. Linux has come a long way, indeed.
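In case it helps someone facing the same snag: a very common culprit on netbooks of that era was a Broadcom wireless chip that needs a proprietary driver. Assuming that's the card (yours may differ), the fix over a temporary wired connection typically looks something like this:

```shell
# Assumes a Broadcom wireless chip, a common netbook case circa 2010;
# other chipsets need different packages. Run this over a wired
# connection, since Wi-Fi isn't working yet.
lspci | grep -i -E 'network|wireless'      # identify the wireless chipset
sudo apt-get update
sudo apt-get install bcmwl-kernel-source   # Broadcom STA driver package
sudo modprobe wl                           # load the driver without rebooting
```

Ubuntu's "Additional Drivers" tool will usually offer the same driver through the GUI.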

Oh, and a lot of the programs I’ve used for years were already installed with Ubuntu. The GIMP — a great graphics editor — was already there, as was OpenOffice.org, a viable alternative to the Microsoft Office suite. The Firefox Internet browser was installed, too, as was the Thunderbird email client. One of my favorite applications from my Windows days — Dropbox — wasn’t included, but getting that up and running was very easy. The only program I wanted but couldn’t get is Evernote, a program that allows one to easily share notes from one computer to another.

Oh, here’s an update — a very nice Linux user by the name of Erik Rasmussen sent me an email telling me I could get Evernote running through WINE — a Linux program that can handle some Windows applications. I’ve got Evernote 3.1 up and running well right now. I didn’t even ask for help and someone offered assistance after reading my earlier comment about not having Evernote on my system. The Linux camp has always been stocked with enthusiastic supporters willing to help users learning the operating system. That’s appreciated.
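For anyone wanting to reproduce the trick, the general recipe is short. The installer filename below is hypothetical — you'd grab the actual Windows installer from the Evernote site first:

```shell
# Install WINE from the Ubuntu repositories, then run the Windows
# installer under it. The installer filename is a placeholder for
# whatever you downloaded from evernote.com.
sudo apt-get install wine
wine Evernote_3.1_Setup.exe    # launches the Windows setup program under WINE
```

Once installed, the program shows up in the menus like any native application, though WINE compatibility varies from one Windows program to the next.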

Here’s the downside of Linux: some of those programs you have probably used with Windows for years aren’t compatible with the operating system. Yes, there are programs that offer similar functionality, but they aren’t exactly the same. OpenOffice.org, for example, can read and write Microsoft Office documents quite well, but the compatibility isn’t perfect in some instances (that’s particularly true when it comes to Excel spreadsheets). Still, Linux does make it possible to get by without Windows well enough, and that’s all that counts for some people.

One of the beauties of Ubuntu is how rarely it requires the user to dive into a terminal (think Unix shell or “DOS ‘C’ prompt” and you get the idea). Installing software under Linux used to be a bit of a chore, but Ubuntu makes it easy. Want to find an application? No problem — just click an icon, head to the Ubuntu Software Center, run a search and find what you need. If a Linux program isn’t in the store, it is possible — in most cases — for the operating system to read the downloaded package file and install it for you. It’s all very easy.
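For the curious, the same workflow is available from the terminal; the search terms and the `.deb` filename below are illustrative examples, not specific packages the post mentions:

```shell
# Terminal equivalents of the Software Center workflow.
# Package and .deb filenames here are illustrative placeholders.
apt-cache search "image editor"    # search the software repositories
sudo apt-get install gimp          # install a package from the repositories
# For a downloaded .deb file that isn't in the store:
sudo dpkg -i some-package.deb
sudo apt-get -f install            # pull in any missing dependencies
```

The point stands either way: most users never need these commands, because the graphical tools cover the common cases.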

Here’s another thing that’s easy. Ubuntu has some pretty good user documentation available online and something even more valuable — an active forum full of people willing to help newcomers solve the problems they’re having. Again, Ubuntu puts the emphasis on ease of use and its flavor of Linux reflects that philosophy quite well.

The menus make sense, too. Applications are generally stored where they need to be, games are located where you’d expect them, the file structure is similar to what you’re used to in Windows, etc. In other words, Linux has come a long way in terms of being easy to use over the past decade. One thing that does feel odd but is pretty slick is the ability to set up independent “workspaces” in which different projects are run. For example, go ahead and work on a word processing document in OpenOffice.org in one workspace while Linux downloads and applies updates in another.

Ubuntu promises to make the system even easier with its Unity desktop. That’s standard equipment on the netbook distribution of Ubuntu and is, essentially, a toolbar that runs down the left-hand side of your screen and allows you to access various menus and programs quickly. While I went with the default interface on my netbook (yes, there’s a way to download and install the “desktop” interface), I haven’t gotten rid of Unity. Ubuntu issues updates about every six months, so who knows what will be built into Unity in the future?

How about security and speed? Linux is pretty secure, really. Any time a program wants to access a crucial part of my system, I get a dialog asking me to type in my user password and give it permission to proceed. There’s not a lot of talk about viruses with Linux, so I feel pretty good about that. As for speed, I have noticed that Linux uses fewer system resources than Windows XP did, but the difference in speed seems nominal. That could be because the system I typically use is a netbook, which isn’t going to set any speed records, anyway.

I’d love to say that Linux is dandy and wonderful and that I’ll never use Windows again. However, I simply don’t know enough about it yet to make that determination. Linux does look promising so far, and I was thrilled to have my computer back and running as usual with most of my familiar applications in just a few hours. However, Windows is still more familiar to most users, and Windows XP and 7 are solid enough that a lot of users will likely wonder why they should bother switching. Microsoft is still the industry standard, and its OS seems to accomplish what most people want to do with it.

Wednesday, December 22, 2010

Novell to retain Unix copyrights in merger

Novell announced on Wednesday that it will retain ownership of its Unix copyrights despite its merger with Attachmate.

The news came two days after Novell announced it would be acquired by Attachmate as part of a $2.2bn (£1.4bn) deal that also involved the sale of some of the company's intellectual property to the Microsoft-organised consortium CPTN Holdings.

"Novell will continue to own Novell's Unix copyrights following completion of the merger as a subsidiary of Attachmate," Novell's chief marketing officer, John Dragoon, said in a post on the company's website. Details have yet to be released on the specifics of the 882 patents and other intellectual property that will go to the consortium as part of the deal.

Friday, December 10, 2010

Processor technology for Unix systems: AMD or Intel?

Every six months or so, processor technology is reinvented with new features and performance improvements. And with server advancements, IT administrators can run 10 or more virtual machines (VMs) on a single physical server. While many initial server technology developments were geared toward Windows-based VMs, in recent years, hardware releases from AMD and Intel have shown a changing trend toward open source systems.

Along with increasing demand for redundant and stable Unix-based physical and virtual machines, processor manufacturers have also heard cries from virtualization and migration engineers. Intel and AMD lead the market with their respective processor lines. Imagine how an eight-core processor supporting up to 1 TB of RAM or a 12-core monster with granular capabilities could better utilize underlying hardware components or enhance power utilization for an even greener server room. The intriguing part is that both manufacturers are still developing new processors at a remarkable pace for Unix systems.

Intel Xeon 7500 series power, performance

Intel has partnered with Unix-based manufacturers to create a processor technology that can drive platforms including Solaris OS. The collaboration optimizes how Solaris and the Intel microarchitecture work together on the new Intel Xeon 7500 series processor (formerly codenamed Nehalem-EX).

New Intel Xeon processors include improved hyper-threading technology and Turbo Boost Technology, which can convert any available power headroom into higher operating frequencies. When a VM requires additional processing power, Intel Xeon increases the frequency in the active core when conditions such as load, power consumption and temperature permit it. By utilizing thermal and power headroom as a performance boost, operating systems running in a virtual environment (like Solaris) can work more efficiently with less overall heat and power consumption. In fact, Intel’s power utilization testing on new Sun machines has shown up to 54% improvement in virtualization power performance per watt.

The Intel Xeon 7500 processor also includes improved I/O virtualization and I/O performance through direct assignment of a device to a VM -- I/O resources can now be allocated to existing or newly configured VMs. With better I/O for each VM, workload performance also improves. You’ll see faster VM load times and better legacy system support.

Intel is also further enhancing its Virtualization Technology for Directed I/O performance by speeding up VM transition (entry and exit) times, meaning much faster load times for a resource-intensive VM and easier migrations between physical boxes or blade servers. By supporting 16 GB DDR3 DIMMs, the newest processors can enable up to 1 TB of memory in a four-socket server and double the memory usually found in an eight-socket product. Features like on-chip cache, RAM and additional memory bandwidth help improve overall system performance. Intel has also placed four memory channels in the Nehalem-EX chips, which puts it on par with AMD's Opteron 6100 Series processors, codenamed Magny-Cours.

Eight-core processor technology aimed at Unix crowd

Intel has specifically targeted Unix deployments with Nehalem-EX’s eight cores, which are considered a key hardware building block for server virtualization. The eight-core chip facilitates massive consolidations for Unix platforms running both legacy and modern OSes.

Dell has already implemented Intel’s eight-core processor in its new PowerEdge Server series. The PowerEdge M910 is a four-socket blade server that can scale up to 512 GB of RAM across 32 DIMM slots, while the R910 rack server is specifically aimed at Unix virtual migrations, large databases and virtualization shops. The R910 is a 4U Nehalem-EX based server with up to 1 TB of RAM using 64 DIMMs of memory.

With eight-core processors, Dell and HP can compete on Sun’s turf, touting superior migration and virtualization services for reduced instruction set computing (RISC) and Unix data centers.

Other manufacturers have also been quick to utilize the new Xeon 7500 series processors. The new Hewlett-Packard (HP) ProLiant DL580 Generation 7 (G7) server, for example, is equipped with DDR3 memory expandable to 1 TB and touts an advanced I/O slot configuration. The new HP G7 server line shows much faster performance than earlier ProLiant servers, especially when coupled with new eight-core Xeon processors.

Oracle has also jumped on board with its new Sun Fire X4800 server, which is powered by up to eight Intel Xeon 7500 series processors, 1 TB of memory and eight hot-swappable PCIe ExpressModules. For Unix administrators, that means more power, consolidation and easier virtualization of pre-existing platforms. These new machines are optimized to run Oracle Solaris, Oracle Enterprise Linux and Oracle VM, and are certified to run Red Hat Enterprise Linux, SUSE and Windows Server.

AMD Magny-Cours vs. Intel Nehalem-EX for Unix

AMD, a fierce Intel competitor, has already seen its 12-core Magny-Cours processor implemented in HP, Dell, Acer, SGI and Appro systems. The Magny-Cours chips offer twice the performance of the company’s Istanbul chips, support four channels of DDR3 memory for up to two and a half times the overall memory bandwidth, and have one-third more memory channels than Intel’s current two-socket offerings. AMD’s chips also support up to 12 DIMMs per processor, which is important for memory-intensive workloads in areas including Unix virtualization, databases and high-performance computing. Depending on the server hardware, that can translate to 512 GB of direct RAM support.

Also new with the 6100 Opteron line is the Advanced Platform Management Link (APML). APML provides an interface for processor and system management monitoring and controls system resources, including platform power consumption and cooling, through performance state (p-state) limits and CPU thermals.

Magny-Cours was a big step for AMD, and server manufacturers have responded. Dell, for example, released the PowerEdge R815 rack server geared toward lowering hardware costs and attracting Unix administrators. With AMD processors installed, the server is designed to deliver up to 48 processor cores (using four 12-core processors).

HP also released a new line of ProLiant machines that utilize the 12-core AMD chips: the ProLiant DL585 G7 server and the ProLiant BL465c G7 server blade both come with the 12-core processor installed. The BL465c blade can handle two 12-core AMD 6100-series processors with up to 256 GB of RAM, while the DL585 uses up to four 12-core Opteron chips and can handle up to 512 GB of DDR3 RAM.

With this new generation of servers, systems administrators now have much more granular control over their processor resources. Unix administrators also have more flexibility to use system resources as needed, and Unix systems can handle large database deployments for both physical and virtual platforms. Large Oracle environments, for example, can utilize new CPU capabilities and perform better in a virtual environment. Another benefit of new server technologies is licensing: for Unix applications that are licensed per socket, you get more bang for your buck, with more cores on the same number of processors.

Unix admins see better VM memory, control with AMD Opteron

More memory is critical for supporting more VMs on a physical host system. AMD’s 6100 Opteron Series has four DDR3 memory channels versus three in Intel’s Xeon 5600, meaning a larger memory footprint. At three memory DIMMs per channel, for example, a Xeon can handle a maximum of nine DIMM slots per socket, but the new Opteron 6000 Series platform can handle 12. Using 4 GB DIMMs will translate into 36 GB on a Xeon versus 48 GB on an Opteron 6000 Series processor. This memory architecture also outshines the old Opteron, which had two memory channels for a maximum of eight DIMMs per CPU.
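The slot arithmetic above is easy to sanity-check: slots per socket is channels times DIMMs per channel, and capacity is slots times DIMM size.

```shell
# DIMM-slot arithmetic from the figures above:
# slots per socket = channels x DIMMs per channel; capacity = slots x DIMM size.
xeon_slots=$((3 * 3))               # Xeon 5600: 3 channels, 3 DIMMs each
opteron_slots=$((4 * 3))            # Opteron 6100: 4 channels, 3 DIMMs each
xeon_gb=$((xeon_slots * 4))         # using 4 GB DIMMs
opteron_gb=$((opteron_slots * 4))
echo "Xeon: ${xeon_gb} GB vs Opteron: ${opteron_gb} GB"   # prints: Xeon: 36 GB vs Opteron: 48 GB
```

That one extra memory channel per socket is where the Opteron's 33 percent capacity edge comes from.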

The increased memory channels are part of AMD's Direct Connect Architecture (DCA) 2.0, new to the Magny-Cours line. DCA 2.0 also increases the number of HyperTransport links between CPUs from three to four, providing faster CPU intercommunication. AMD also plans to further develop its AMD Virtualization technology (AMD-V) and AMD-P power technologies with its generation 2.0 capabilities. AMD-V 2.0 is now better able to support I/O-level virtualization and provide direct control of devices by a VM, and improves upon I/O performance within a VM. AMD-P 2.0 has added support for LV-DDR3 memory, APML and the much-anticipated AMD CoolSpeed Technology. CoolSpeed reduces p-states when a temperature limit is reached to allow a server to operate if the processor’s thermal environment exceeds safe operational limits.

With all of these additional cores and capabilities now at the administrator’s disposal, Unix environments and Windows shops can manage their data centers in a more efficient, blade-style environment. Admins initially hesitant to migrate or virtualize their Unix-based databases can now do so with more confidence. Data center administrators have the hardware resources available, giving them more control over data management and resource distribution.

Future processor technology developments

AMD and Intel will continue to release more advanced processors. AMD expects its “Interlagos” 16-core processor to be released sometime in 2011. Intel has also stated that its upcoming Westmere-EX chip will bring new capabilities through a scalable architecture and 32nm fabrication technology. The chip will increase the total number of cores from eight to 10, which means 20 threads can run in parallel. Maximum memory per DIMM will also double over the previous Intel Xeon 5500 series, to 32 GB, meaning a two-socket system would be able to support up to 2 TB of memory.

Unix admins can expect features from Intel and AMD that will make their legacy systems easier to manage. As computing environments grow older, new processor technology will be able to handle workloads more effectively in virtual environments. Upcoming technologies should increase the ability to consolidate infrastructures using blade server technology that incorporates the new 16-core processor family. Intel and AMD will also focus on reducing power consumption; as dynamic resource allocation becomes more refined, power usage should decrease. With the ability to stock RAM and processors into a blade server, you’ll be able to migrate and virtualize Unix systems, manage massive workloads between physical boxes and environments, and finally reduce your overall hardware footprint.

Thursday, December 2, 2010

Microsoft's view of patents: a pre-emptive tool or revenue model?

As the US Supreme Court agreed on Monday to hear Microsoft's appeal of a 2009 court order requiring it to pay $290 million to i4i for patent infringement involving its high-profile Microsoft Word product, Microsoft itself has been on a buying spree, collecting patents that cover a range of its products.

Attachmate's recent acquisition of Novell also raised questions about one component of the deal: the sale of 882 patents to a Microsoft-led consortium, which stirred concern in the FOSS community about the future of Unix and openSUSE.

Microsoft's motivation in piling up these patents has been a subject of much scrutiny. Some reports deem Microsoft's patent buying spree a preemptive measure to protect itself from patent trolls and competitors. But others surmise that Microsoft could exercise intellectual property rights as assets on its balance sheet to generate revenue, much as patent trolls do.

Usually, companies of Microsoft's scale that use patents to develop products exercise their patent rights to win permanent injunctions and protect their market share. Microsoft's lawsuit against Motorola and HTC could be an attempt to shield turf for its Windows Phone 7; notably, the suit was filed on Oct. 4, just days before Windows Phone 7 was released.

However, there are patent trolls like Acacia, from whom Microsoft has recently licensed patents, that do not seek to create products from patents but are more inclined to exercise IP rights to collect license fees.

Microsoft has been busy buying patents and signing cross-license deals with companies to build its patent portfolio. While open-source advocates fear these developments and competitors buy up patents of their own in response, Microsoft is surely brewing a storm in the world of patent wars.

Here is a list of major patent acquisitions and cross-license deals signed by Microsoft in 2010:

In October, Microsoft acquired Canesta, a 3-D image sensor chipmaker based in Silicon Valley. Canesta holds 44 patents and has filed for a few more related to electronic perception technology, which is used in developing motion-sensing devices like Kinect.

Microsoft licensed 74 patents from Acacia Research Corp. and Access Co. Ltd., a Japanese firm that had acquired PalmSource. The patent portfolios relate to smartphone technologies and include IP rights over technologies created by PalmSource.

In May, Microsoft also licensed patents from Acacia related to technology for enhancing image resolution, and in January it licensed patents from an Acacia subsidiary related to software compilers.

Microsoft and Amazon signed a cross-license agreement that grants each company access to the other's patent portfolio, covering technologies related to Amazon's Kindle e-reader and Linux servers.

Microsoft also has existing cross-licensing deals with Apple, Samsung, LG, Xandros, Fuji-Xerox, NEC Corp., Seiko Epson Corp. and Toshiba Corp.

Source: http://hken.ibtimes.com/articles/87079/20101130/microsoft-patents-i4i-cross-patent-word-microsoft-office-attachmatke-linux-unix-novell-motorola-htc.htm

Monday, November 29, 2010

Whaaa? IBM Gets Stingier with Power Systems Deals

We are in the heart of the fourth quarter, with questionable stats in the Western economies and very good growth in the emerging markets of China, India, Russia, Brazil, and a handful of other countries. With Power Systems revenue on the decline year-on-year--and against a pretty easy compare, mind you--I would expect that IBM would be out there wheeling and dealing to crank up Power Systems sales. But thus far, Big Blue is behaving like it thinks it has its pricing right--or perhaps like it was a bit too aggressive on the pricing with this year's Power7 iron, in fact.

You heard that right.

IBM is tweaking things here and there in some deals to make me believe this. On November 16, in announcement letter 310-288, IBM added the Power 795 to a long-running trade-in deal for Power 595 machines that was last updated in November 2009, when we all knew Power7 machines were on the horizon. At the time, it became apparent that IBM was not going to get its high-end Power 795 into the field until the summer or fall of this year, and that left the company trying to give trade-in credits to customers buying Power6-based Power 595 iron. The November 2009 deal (in announcement letter 309-576) gave customers trading in older IBM Power iron, as well as competitive Unix and proprietary boxes from Hewlett-Packard, Fujitsu, and Oracle, credits that ranged from $25,000 to $200,000 on Power 595 machines using 4.2 GHz Power6 chips and from $30,000 to $240,000 on machines using 5 GHz chips. The trade-in varies depending on how many cores you activate--the more you buy, the larger the trade-in.

With this iteration of the trade-in deal, the Power 795 is added to the mix alongside older Power 595 gear, and the trade-ins are a lot less generous for both new and old iron. On the Power 595 machines in the latest version of the deal, the 4.2 GHz boxes only get trade-ins that range from $7,000 to $37,000, and the 5 GHz boxes only get from $10,000 to $100,000. Interestingly, the trade-in credits for customers who go for 4 GHz Power7 processors range from a low of $5,000 on a machine with between six and 31 cores activated to $120,000 for a maxed-out machine with all 256 cores activated. IBM had been roughly holding the price of a high-end Power System machine steady while doubling performance in the move from Power6 to Power7 machines, but the trade-in offers only half as much dough per core.

That would seem to indicate that IBM regrets setting Power7 system prices at the high end as aggressively as it did. But then again, for such big systems, these trade-in deals are the start of a conversation, not the final discount a customer can get. Particularly if it is a takeout deal that involves converting an HP, Fujitsu, or Oracle shop to IBM Power Systems.

When you take part in the deal, IBM assigns what it considers a fair market value, plus an additional monetary incentive, to the box you are getting rid of. These takeouts look pretty stingy to me, considering what these machines cost, but there is not as vibrant a market for second-hand equipment as there was a decade ago, so it is hard to say how stingy.

One other thing, and I have complained about this before. On this revised IBM Power 595 and 795 Server Trade-In Program, prior generations of System i and Power Systems iron running i5/OS or IBM i operating systems are not included. Which is perfectly asinine, and shops using older i boxes should behave like they are part of the deal. We are all one big, happy Power Systems family, after all, right?

On a second deal that was modified last week in announcement letter 310-290, older AS/400, iSeries, and System i servers are included as takeout machines alongside Unix and proprietary gear from HP, Fujitsu, and Oracle. The long-running deal is now called the Power Express Server Trade-In Program, and now includes the entry Power 710, 720, 730, and 740 Express servers as well as the Power 750 Express machines that were added to this deal back in June. In this case, the trade-in credits did not change on prior-generation Power 520, 550, 560 machines and the Power 750s announced in the spring. Credits range from $1,000 to $1,750 on Power 520s, from $2,000 to $9,000 on Power 550s, and from $3,000 to $10,000 on Power 560s. That didn't change, and IBM didn't knock off the Power 5XX entry and midrange boxes from this deal just like it didn't kill off the Power 595 in the above-mentioned deal. (This likely means there is some Power6+ gear in the barn at Big Blue and its reseller channel partners that the company is trying to move.)

The interesting bit to me is how stingy the trade-ins are on the entry Power7-based machines that were added to this second deal. On the Power 710 Express setup, IBM is giving a trade-in credit that ranges from $500 to $1,200, depending on the system configuration, and from $200 to $1,500 on the Power 720 Express. The Power 730 Express machines you might buy have a trade-in credit ranging from $1,300 to $3,000. These entry servers with two processor cards have faster and more expensive Power7 chips, hence the better trade-in credits. The Power 740 machine, also a two-card box sporting faster chips, has a trade-in credit that ranges from $500 to $3,500. Generally speaking, on a performance-for-performance basis, the trade-in credits on the newer entry Power7 machines are less generous than the credits being given on older Power 520, 550, and 560 iron.

This seems contrary to me, unless IBM is regretting setting its prices so aggressively against some pretty compelling Xeon and Opteron servers down there in the entry and midrange part of the field. Raising prices would cause an uproar, as well as force IBM to redistribute its literature, but being a little less generous on trade-ins for newer gear (which people will want) is one way of goosing Power Systems revenue a bit. In theory, at least. Practice out there in the midrange is a whole lot messier than theory up in Somers, New York, where these decisions are made.