Monday, December 24, 2012

“Migration from Unix to Linux and Cloud are key driving factors for our business”

Arun Kumar, General Manager, Red Hat India, talks to KTP Radhika about their growing middleware business and the business opportunities in open source Cloud.

 You have made a few acquisitions in the recent past and tried to diversify business. Where do you stand now?

Till 2006, we were a single-product firm offering the Red Hat Enterprise Linux solution. After acquiring JBoss, an open source application server, in 2006, we got a whole set of middleware products. JBoss stands as the most popular middleware application available in the market today. In 2008, we bought Qumranet, a software company offering desktop virtualisation built on kernel-based virtual machine (KVM) technology. In virtualisation, KVM is a very important open standards-based choice for companies and enterprises. Last year, we acquired Gluster, which offers Cloud storage and big data services. Recently, we took over FuseSource, a provider of open source integration and messaging, from Progress Software Corporation, along with the business process management (BPM) technology developed by Polymita Technologies. We have gone from being a single-product solution provider to offering a broad portfolio spanning the middleware stack and Cloud computing. We have been diversifying our portfolio for quite some time, both by working with the open source development community and through major acquisitions.

 What are the key verticals driving growth for Red Hat?

Globally, government, telecom and the BFSI sectors have been the biggest adopters of open source solutions. In India too, the trend remains the same. The government is an integral part of our business in this market. Apart from the central government, many state governments are also adopting open source solutions. Indian CIOs are looking for cost-effective and open infrastructure solutions, and Red Hat is committed to addressing this growing demand for open source solutions in Indian enterprises, both directly and through partners.

Migration is one of the key aspects of our business globally and in India. Many companies that are still working with Unix-based servers and infrastructure are now migrating to the Linux platform. We see a big opportunity in this shift from legacy infrastructure to commodity architecture. Red Hat is working with those customers to help them migrate to a commodity architecture. While enabling migration, we will also help them virtualise. Once they virtualise, it will be easy for them to move to Cloud. In India, there is also a much bigger opportunity in greenfield infrastructure, especially in the public sector.

How is your middleware business catching up?

After the Linux stack and the infrastructure stack, middleware is what most customers are evaluating more seriously from an open source perspective. The main reason for this is the cost factor: open source middleware can be made available at a fraction of the cost of a proprietary stack. Globally, many customers who had deployed proprietary stacks have shifted to JBoss. We are seeing that trend in India as well. Red Hat, as an open source company, is bound to help customers in this migration. We explain to them the architecture, migration plans, risks in migration and so on. In the middleware space, we have to take maximum care since it actually touches business applications.

Further, integration remains a key driver for the adoption of middleware technology. Rising mobile phone connectivity, Cloud and hybrid infrastructure, and the explosion in data are bringing in new requirements for integration technology, exerting huge pressure on existing systems. Today, the opportunity for integration middleware software is evolving from traditional solutions that focus on foundational integration capabilities to higher-level capabilities, including business rules management and BPM. Red Hat already provides a foundational integration offering with the JBoss Enterprise SOA Platform, as well as a business rules management offering via JBoss Enterprise BRMS. Complementing these existing products with additional technologies and talent, such as the integration and BPM capabilities acquired from FuseSource and Polymita, will help us enhance our position in the enterprise middleware marketplace.

CIOs are now looking forward to Cloud for enhancing efficiencies. What are Red Hat's offerings in this space?

Cloud is going to be universal. We have very specific cases where Cloud has helped CIOs. Cloud promises commodity architecture, interoperability, IT-on-demand and is highly scalable. These factors are there with open source architecture as well. Open source provides high interoperability since it publishes the source code. Linux also has a highly scalable architecture, and provides interoperability. We work on a subscription-based model. Open source Cloud software is widely used today on commodity servers, so it is a natural evolution to leverage open source for Cloud computing on commodity storage.

Over the past 12 months, we announced two Cloud solutions. One is CloudForms, an open hybrid Cloud-management framework. The other is OpenShift, a Platform-as-a-Service (PaaS). We have also introduced an OpenStack offering, for which a preview is now available. The Cloud world is evolving every day. In open source Cloud, it is all about where the developers are. We are seeing a huge rush in the developer community, and we are closely monitoring all the activity there. We are actively involved in this space and it is one of our focus areas for the next three years.

Monday, December 17, 2012

Windows/Linux/UNIX Team Lead

Job ID GBS-0524431
Job type Full-time Regular

Work country USA
Posted 13-Dec-2012
Work city San Jose, CA
Job area Consulting & Services
Travel No travel
Job category IT Specialist

Business unit ConServ
Job role Infrastructure Specialist
Job role skillset AIX/UNIX
Commissionable/Sales-Incentive jobs only No
Job description
IBM is leading the way to a whole new generation of intelligent systems and technologies, more powerful and accessible than ever before. Be a part of our progress as an Infrastructure Architect.

The Team Lead will be responsible for providing direction for a technical team of server (Windows, Linux, and Unix) and VMware specialists. This position requires a good fundamental understanding of Windows Server, Red Hat Linux, VMware, and Unix infrastructure. It does not require current hands-on experience in these areas, but sufficient knowledge to direct with understanding. This person will be the principal person in the development of an overarching virtualized architecture consisting of Windows, Linux, Unix, SAN, virtual infrastructure, and network infrastructure components.

Partner with some of the best minds in the industry, and apply your technical know-how to manage and operate IT hardware, software, communications and/or application solutions. You'll manage the resources required to plan, develop, deliver and support expertly engineered IT services and products that meet worldwide clients' needs.

The challenges don't stop there. You'll be responsible for preparing for new or changed services, management of the change process and maintenance of regulatory, legal and professional standards. You'll also manage the performance of systems and services, as well as brought-in services, including public networks, virtual private networks and outsourced services.
Deliverables include service-level reporting, risk and contingency planning.

Virtual architecture development is a key focus area for this position. Additionally, the role provides support direction for the people supporting the individual component areas (Windows, Unix, Linux). Support-related responsibilities include sizing, troubleshooting, and critical customer situations. In this position you will primarily apply your technical skills in an internal or external customer billable services implementation environment.
Come develop some of the most exciting technologies and product innovations in the world today. Join us.

Interested in learning more about IBM? Check out the IBM Global Careers newsletter.

The work location for this position is Seaside, CA.
Required
  • High School Diploma/GED
  • At least 2 years' experience in Linux or UNIX systems administration and maintenance.
  • U.S. citizenship required
  • English: Basic knowledge

Preferred
  • Bachelor's Degree
  • At least 4 years' experience in Linux or UNIX systems administration and maintenance.
  • At least 1 year's experience directing a team of infrastructure specialists
  • Security clearance of Secret - Active
  • Certified in Security +
  • English : Intermediate

Additional information

To be an official applicant to IBM, you must submit a resume and online application. Resumes submitted remain active for six months.

To ALL recruitment agencies: IBM only accepts resumes from agencies on our Approved Agency List. Please do not forward resumes to our applicant tracking system, IBM employees, or send to any IBM company location. IBM is not responsible for any fees related to unsolicited resumes.

IBM is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.

Monday, December 3, 2012

El Reg Movember lads sprout mighty Unix beards for cancer charity: Your LAST CHANCE to donate cash to razor-free fundraiser

What is it about facial hair and computing? From the greats of Smalltalk and C++ to Java and Ruby, beards and moustaches are not a rare sight in the world of computer science.

James Gosling, Bjarne Stroustrup, Alan Kay and Yukihiro "Matz" Matsumoto have expressed their genius not just through the curly braces in the machine but also the curly wurlies on their chins.

From data centers to dev teams, from the Ruby hipsters to Unix admins, top-lip warmers and jowl scratchers are more than well represented.

Inspired by these legends, El Reg staffers have been sending their top lips into the world unshaved for the whole of November in the name of a good cause: Movember.

The Reg Movemberists have been growing their tashes to raise money for the global fight against testicular and prostate cancer. One in nine UK men will be diagnosed with prostate cancer in their lifetime, according to the Movember movement.

So far our Movemberists have raised £392. Last year, Mo Bros and Mo Sisters across the world raised £79.3m; this year we’re all hoping for more.


Monday, November 19, 2012

Setting limits with ulimit

Administering Unix servers can be a challenge, especially when the systems you manage are heavily used and performance problems reduce availability. Fortunately, you can put limits on certain resources to help ensure that the most important processes on your servers can keep running and competing processes don't consume far more resources than is good for the overall system. The ulimit command can keep disaster at bay, but you need to anticipate where limits will make sense and where they will cause problems.

It may not happen all that often, but a single user who starts too many processes can make a system unusable for everyone else. A fork bomb -- a denial of service attack in which a process continually replicates itself until available resources are depleted -- is a worst case of this. However, even friendly users can use more resources than is good for a system -- often without intending to. At the same time, legitimate processes can sometimes fail when they are run against limits that are designed for average users. In this case, you need to make sure that these processes get beefed up allocations of system resources that will allow them to run properly without making the same resources available for everyone.
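As an illustration of that trade-off, the per-user process limit is the one that blunts a fork bomb. The following is only a sketch in bash syntax, and the figure of 1000 is an arbitrary example; the right cap depends entirely on your users and workload:

```shell
# Lower the soft limit on user processes for this shell session.
# A non-root user can lower a soft limit at will, but can only
# raise it back up as far as the hard limit.
ulimit -S -u 1000

# Verify the new soft limit.
ulimit -S -u    # prints: 1000
```

With such a cap in place, a runaway process tree fails at fork() once the limit is hit, instead of starving the whole system.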

To see the limits associated with your login, use the command ulimit -a. If you're using a regular user account, you will likely see something like this:
$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 32767
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 50
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
One thing you might notice right off the bat is that you can't create core dumps -- because your max core file size is 0. Yes, that means nothing, no data, no core dump. If a process that you are running aborts, no core file is going to be dropped into your home directory. As long as the core file size is set to zero, core dumps are not allowed.
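Going the other way, you can raise a soft limit up to the hard limit when you do want core dumps. A minimal bash sketch, where the 1024-block figure is just an example value:

```shell
# Show the current soft limit on core file size (often 0).
ulimit -S -c

# Permit core files up to 1024 blocks for this shell and its children.
ulimit -S -c 1024

# Confirm the change.
ulimit -S -c    # prints: 1024
```

To disallow core dumps again for the session, set the limit back with `ulimit -S -c 0`.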

Tuesday, November 6, 2012

Learning Unix for OS X Mountain Lion

This was a perfect-size book for what I needed. What I needed was a book that would get me back up to speed quickly on UNIX. I have not used UNIX since the late '80s, and I have not used a Mac since the early '90s.

The book starts off with a nice introduction to Unix and why you would want to use it. The second chapter, on using the Terminal, is a nice introduction to Terminal's capabilities and shows you how to customize it.

After the first two chapters the book starts digging into the details of what you can do with Unix on the Mac. I have listed the chapters below.

1. Why Use Unix?
2. Using the Terminal
3. Exploring the Filesystem
4. File Management
5. Finding Files and Information
6. Redirecting I/O
7. Multitasking
8. Taking Unix Online
9. Of Windows and X11
10. Where to Go from Here

As you can see from the chapter names, the author covers some of the most important topics you need to know, with a chapter dedicated to each.

The only chapter whose title is not completely clear about what it covers is Taking Unix Online. That chapter digs into remote login, web access, and FTP.

Although the book talks about using the code samples that come with the book, I could not find any online. That did not take anything away from the book. The samples are short enough to be able to type them yourself. I found that I wanted to read this book with my Mac turned on to try the different samples that the author covered.

The author has a great writing style that makes the book an easy read. I have some other UNIX books where that is not true.

Overall I think this is a great book for getting up to speed with UNIX quickly. This book, along with my Macintosh Terminal Pocket Guide, has covered everything I have needed to look up so far.

Source  http://www.sys-con.com/node/2431885

Wednesday, October 10, 2012

The true legal vulnerability of Linux

A recent focus on the problem of software patents raises the question: could Linux be sued off the face of the Earth?

The not-so-random thought came up this weekend when I read the New York Times' special report, "The Patent, Used as Sword." This article, which I highly recommend you read when you get a chance, comprehensively examines the broad landscape of software patents without really coming down too hard on one side or the other. It does, I should add, leave you with the sense that something is wonky with this whole idea that billions can be spent and companies can go down just because one side's lawyers are quicker on the draw than others.

If I were the uncharitable type, I would say that this is also the article where The Times finally gets a clue, but that's not really the case. The staff at the Grey Lady knows full well what's going on in the land of software patents, having covered it for quite some time. But reading the piece, I could not help but feel a little sense of relief that the Times was finally giving the topic the level of detail that a lot of us in the tech media have been hollering about for years.

I hope that the increased attention on the issue helps bring some pressure on the patent system to bring some real reform. But then I also hope I'll win the lottery, so there's that.

Reading some of the ugly anecdotes of companies going down, it was very hard not to imagine someone pulling the trigger like that on a Linux company. Which led to the central question of this blog: why hasn't someone fired the litigious missiles at Linux yet, and is it just a matter of time before Linux gets big enough that someone will make the effort?

Well, for one, companies have come at Linux already, and thus far their efforts have been either blocked in court or elegantly countered. 

The first, and most infamous major example of this was, of course, The SCO Group's clumsy attempt to try to muscle Linux users into a licensing program based on the (wrong) premise that they (a) owned the rights to Unix and (b) Linux infringed on Unix and therefore Linux users owed SCO some coin. (For further background, see Groklaw. Pretty much all of it through 2010.)

Now, as we all know, in 2003 SCO went after some big-pocketed customers (AutoZone and DaimlerChrysler) and the richest Linux player at the time, IBM. This would be the first real case that all of us at the time were waiting for and dreading: Linux now had some money behind it, and now someone wanted to get their share.

But then things went sideways for SCO, and the real test of the case, that Linux allegedly infringed on Unix, never got a strong test in court. Novell started making noises that SCO did not have the right to sue on the basis of the Unix copyrights because, lo and behold, Novell still owned those rights. SCO immediately sued Novell to shut them up and maintain that it owned Unix… and lost. 

Crash, boom, bye bye SCO.

There have been others, mostly directed at Red Hat, which has established that there's gold in them thar hills and therefore attracts the inevitable lawyers chasing after what they can get. Red Hat successfully fended off Acacia-owned IP Innovations in a 2010 East Texas jury trial (I know, go figure!), invalidating three of IP Innovations' patents.

But later that same year, Red Hat would settle with another Acacia-owned company, Software Tree LLC, for an undisclosed amount. 

Early in 2010, Red Hat was dragged into Canadian company JuxtaComm-Texas Software LLC's infringement claims for a patent (US Patent No. 6,195,662) covering the manipulation and exchange of data between computer systems, along with companies like British Air. A quick check of Pacer reveals that while that case in the Eastern Texas district court ended just last month, Red Hat seems to have settled again on August 22, 2011.

That's too bad, because in July of this year one of the other remaining defendants in the case, Pervasive Software, blew a hole in two claims within '662 and rendered the patent invalid. On September 19, JuxtaComm would end up being ordered to pay the remaining defendants' court costs.

Red Hat doesn't always settle. In another case, defending itself against Twin Peaks Software, Red Hat took the unique step of answering a patent infringement claim with a counterclaim that Twin Peaks is in copyright violation on mount, the file management app that is licensed under the GPLv2. Not only is Red Hat seeking GPL compliance, it's also going after Twin Peaks for damages and is seeking an injunction on Twin Peaks' own product sales.

On one level, using the GPL as a weapon is pretty funny, but in seriousness it really only works against companies that are actual practicing entities, not the non-practicing entities colloquially known as patent trolls.

Don't take away from this article that I'm picking on Red Hat. But as the most visible Linux player these days that's making some real money, Red Hat will no doubt find itself embroiled in other lawsuits.
But what about the Big One? The mother of all lawsuits destined to Destroy All That Is Penguin?
Many in the Linux community believe that the SCO case, as a proxy for Microsoft, was just that. Rather than the death-through-a-thousand-cuts approach all of these individual trolling cases seem to be taking, the SCO case was the biggest concerted effort against Linux, one that would have (if they could have pulled it off) potentially put Linux under the boot of a decidedly unfriendly company. But the case was bungled so badly that, if there was a conspiracy, I have little doubt the actors in such a cadre were left feeling very burned by the experience.

Does that mean the Big One will never come? No one can predict for sure, but I have a gut feeling that no, it won't really happen. While there will always be the odd troll, I don't think it's in the interest of a major competitor to try a take down of Linux in the courtroom.

Why not? Simply because Linux is so pervasive now. Suppose patentgeddon did somehow happen, and Linux were forced to change drastically in order to comply? Do you know how many core infrastructures would be blown up were that to happen? And where would these IT departments go? To Microsoft? To Oracle? If one of these companies happened to be the Plaintiff, then I think not. More likely Linux customers would be so angry at having to give up or overhaul their Linux machines, they would start to resent said Plaintiff's products with a vengeance.

And then there's the simple reason of: why bother? Why would Microsoft try to destroy Linux when it can simply co-opt the technology and use it within its own products as needed? In fact, that's exactly what the company is doing, and is actually becoming a larger player in the Linux kernel space.
Free software, whether as in freedom or as in beer, just makes it less productive to mount a shock and awe campaign.

That's not to say Microsoft and other players won't keep trying to whittle away at Linux through various licensing campaigns and veiled threats. This is a war of slow attrition, and ultimately the goal will not be the destruction of Linux, but control of it.

Wednesday, September 26, 2012

Army releases $5B ITES hardware RFP

The Army has released the solicitation for its $5 billion Information Technology Enterprise Solutions 3-Hardware contract.
Proposals for the five-year contract are due Oct. 22.

The Army will use the contract, known as ITES-3H, to buy a variety of IT equipment to support the Army’s IT infrastructure. The hardware has to be compatible with the Army’s network operations and net-centric operations.

The equipment to be purchased will fall into nine broad categories, such as Unix and non-Unix servers; workstations, desktops and thin clients; storage systems; networking equipment; video equipment; and cables, connectors and accessories.

The contract will have multiple winners, who will compete with each other for task orders. ITES-3H has a three-year base and two one-year options.

ITES-3H replaces ITES-2H, which is held by CDW, Iron Bow Technologies, Dell Inc., GTSI Corp., IBM Corp. and World Wide Technology.

According to Deltek, $4.1 billion in task orders have been awarded under the contract over the last five years. Iron Bow and World Wide have captured the most business, each taking in over $1 billion in awards. IBM had the lowest amount with $136.6 million in business.

Source  http://washingtontechnology.com/articles/2012/09/25/ites-3h-solicitation-released.aspx

Sunday, September 9, 2012

Enterprise Networking: Centrify Brings Centralized User Management to Heterogeneous Networks

Sold in suite form, Centrify Suite 2012 Enterprise Edition brings several elements together to create a user and system management suite that leverages Active Directory while incorporating Linux/Unix/Macintosh systems into the mix. Centrify's centralized user management helps tame the new security paradigm created by multidevice user access scenarios to servers, applications and endpoint systems in the cloud and on-premises. The product's centralized management screen makes it easy to define roles, groups, zones and other container-level directory elements, while incorporating a single sign-on methodology for end users. With Centrify Suite 2012 Enterprise Edition, administrators can remotely deploy software, as well as enable advanced auditing capabilities, transparently to users of Windows, Linux, Unix and Macintosh systems and servers. Unlike meta-directory synchronization tools, Centrify brings direct integration capabilities, allowing just a single directory to hold all account data and eliminating the need for cross-directory synchronization.

Wednesday, July 25, 2012

Zsh gets new stable release after four years

The developers of the Z shell have announced the release of zsh 5.0.0, the first major stable version of the tool since zsh 4.2 was released in 2004. Zsh is a shell for Unix and Unix-like operating systems that is mainly designed for interactive use, but is also well suited to other tasks. The previous stable release of the project was zsh 4.2.7 in 2008.

The 5.0 release includes the features that have been developed in the unstable 4.3.x branch of the project; this includes support for highlighting, colour display and job control in non-interactive use. This new release also adds support for multibyte character strings. 

Version 5.0 of zsh includes new options that improve POSIX compliance, debugging and the history functions of the shell. New shell variables give users information about the current sub-shell level and the exact patch level of the tool, providing more granularity between release versions. Users can define patterns to be ignored by spelling correction. Z shell 5.0 also includes several new features that improve text completion, command evaluation and expansion of parameters. A full list of new features is available in the NEWS file for the release.

The source code for zsh is licensed under its own MIT-like licence and can be downloaded from the project's mirrors and FTP server.

Wednesday, June 27, 2012

Asian Internet Traces Roots to Kilnam Chon

Kilnam Chon brought the internet to Asia. And you’d have to say the move was successful.
In South Korea — where Chon led a research team that installed the first two nodes on Asia’s first internet protocol network — broadband connections are used in over 95 percent of households, a figure that eclipses every other country on earth. Singapore, Taiwan, and Hong Kong aren’t far behind, and all cast a shadow over the US, where broadband reaches about 60 percent of our homes.

Chon is also the founding father of multiple organizations that still drive the Asian internet — including the Asia Pacific Networking Group and Asia Pacific Top Level Domain Name Forum — and earlier this year, in recognition of his role in bringing the continent online, he was inducted into the inaugural class of the Internet Society's (ISOC) Internet Hall of Fame, alongside such names as Vint Cerf, Van Jacobson, Steve Crocker, Sir Tim Berners-Lee, and Elizabeth Feinler.

Though he pioneered the Asian internet at the Korea Institute of Electronics Technology, Chon isn’t Korean, and he spent his formative years outside the country. He was born and raised in Japan, and he completed his education in the US. After receiving a bachelor’s degree in engineering from Japan’s Osaka University in 1965, he enrolled in the fledgling computer science program at the University of California at Los Angeles, where many say the internet was born.

Chon tells us that at UCLA, he studied with Leonard Kleinrock, who oversaw the team that sent the first message across the ARPAnet, the US-Department-of-Defense-funded network that eventually morphed into the modern-day internet. But Chon wasn’t involved with the ARPAnet during his nine years at the University. He says the time wasn’t right for a foreigner like him to work on a US military network. “It was the time of the Vietnam War,” he says.

But after he moved to Korea in the late 1970s and joined the new Korea Institute of Electronics Technology — a government-funded laboratory dedicated to computer and semiconductor research and development — he and his colleagues built their own network. In 1980, his team proposed a national network to the Korean Government’s Ministry of Commerce and Industry, and the idea was shot down. But a revised proposal was accepted a year later, and they soon began work on what was then called the Software Development Network, or SDN.

Crucially, the team chose to build the network using the TCP/IP protocol that researchers in the States — most notably Vint Cerf and Bob Kahn — had built for a revised incarnation of the ARPAnet. According to Chon, they settled on TCP/IP because their network was part of a larger computing research project that was based on the UNIX operating system and TCP/IP dovetailed nicely with UNIX. But the pick paid added dividends in the decade to come, when TCP/IP gave rise to the internet as we know it today.
In the early ’80s, the United Kingdom and Norway also followed the ARPAnet’s move to TCP/IP, but Chon’s SDN was the first network to use the protocol outside of the US and Europe. The network went live in 1982, before the ARPAnet was officially converted to the internet protocol. By 1985, it was connecting about 20 universities, national research laboratories, and corporate labs. And two years later, it was plugged into several other parts of Asia, including Australia, Indonesia, Japan, Singapore, Malaysia, and Hong Kong.
It was also plugged into the US, but not with TCP/IP. In those days, it talked to the States using a dial-up connection based on the Unix-to-Unix Copy, or UUCP, protocol. A TCP/IP connection didn't arrive until the first leased line between Korea and the US was activated in 1990.

But Kilnam Chon didn't just seed the Asian internet. He was also the driving force behind its evolution throughout the '80s and well beyond. In 1985, he was the program chair of the Pacific Computer Communications Symposium, one of the first global internet conferences — and the last for several years. In 1991, he founded the Asia Pacific Networking Group, an organization whose sole purpose was to advance networking in the region. And in 1999, he launched the Asia Pacific Top Level Domain Consortium, which oversees the continent's internet domain names.

No, he never worked on the ARPAnet. But he worked on something far bigger.


Wednesday, June 13, 2012

Linus Torvalds in Helsinki for Chance at Finnish Tech Glory

Linus Torvalds is in Helsinki today, vying for a million-dollar prize and a whole bunch of hometown recognition.

He’s up for the Millennium Technology Prize — an award given out every two years by the Technology Academy Finland, a foundation set up to — you guessed it — promote technology in Finland. 

The prize is given to inventors of “life-enhancing technological innovation,” organizers say, and as far as many geeks are concerned, a free version of Unix that runs on low-cost Intel processors definitely fits that description. Torvalds’ competition: Shinya Yamanaka, the University of California, San Francisco researcher who invented an embryo-free method of creating stem cells for medical research.

The winner is set to be announced tomorrow. He will join the ranks of scientists who’ve pioneered somewhat obscure but critical innovations including DNA fingerprinting, organic light-emitting diodes, and the ARM 32-bit RISC microprocessor.

With the Millennium Technology Prize comes a dangerous-looking silicon-tipped sculpture and a big pile of cash. The total prize pool for this year’s award is €1.2 million, or about $1.5 million. The amount that actually gets handed over to the winner is up to the judges, though.

No matter who wins, Torvalds and Yamanaka “may well be talked about for centuries to come,” said Technology Academy Finland President Ainomaija Haarla, in a canned comment on the foundation’s website.

Not bad for someone who started his signature project from his bedroom back when he lived with his mom in Helsinki.

Torvalds lives with his family in Portland, Oregon, but both he and Yamanaka are in Helsinki for the award presentation. Torvalds left Finland in the late 1990s to come work in Silicon Valley, but he’s still a pretty big celebrity in his motherland.

Friday, April 27, 2012

SQL Server Application Support Analyst - Back Office


A top tier Investment Bank requires an experienced Application Support Analyst to join its Payments and Funding area. You will be required to support a number of key applications in the area, which covers Global Treasury Trading and Settlements.

Ideally, the right candidate will already be working in a similar environment supporting these sorts of applications. The environment is a mix of Windows and Unix platforms. The databases supported are SQL Server, so SQL Server knowledge is a must (triggers, stored procedures, SSAS/SSRS), combined with scripting skills in, for example, Perl or JavaScript.

You will be joining a global support team in a follow-the-sun model. It is a good opportunity to join a stable team that follows the ITIL framework.

For more information and a job specification please forward a Word CV in confidence.

Source: http://careers.hereisthecity.com/job/sql-server-application-support-analyst---back-office-367409/

Monday, March 19, 2012

NetBSD 6.0 Beta Arrives With Cortex-A8, MIPS64 Support

NetBSD is a compact Unix-like open source operating system. NetBSD's origins are in version 4.3 of the Berkeley Software Distribution (BSD), which was developed as a Unix derivative by the University of California, Berkeley until 1995.

NetBSD 6.0 Beta arrives as a substantial update with numerous new drivers, support for ARM Cortex-A8 processors, greater compatibility with Linux software, as well as Xen support for multiprocessor systems.

With support for ARM Cortex-A8 CPUs, as well as 64-bit MIPS processors, the operating system could become much more attractive for ultraportable systems that rely on a highly customized, low-cost feature set. There are several interesting feature additions that go along with a strategy apparently targeted at expanding the operating system's reach. These include the addition of the Logical Volume Manager (LVM), which supports partitions that can be resized dynamically, as well as the replacement of the Xen 2 virtual machine monitor with version 4.1, which now supports Citrix XenServer and the Xen Cloud Platform.
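For readers unfamiliar with LVM, the dynamically resizable partitions work roughly as in the following sketch. This is an illustrative admin-session outline, not taken from the NetBSD release notes: the device and volume names are hypothetical, and NetBSD exposes these operations through a single lvm(8) wrapper command whose exact flags may differ from the Linux LVM2 tools shown in most tutorials.

```shell
# Register a raw disk as an LVM physical volume
# (/dev/wd1d is a hypothetical NetBSD disk device)
lvm pvcreate /dev/wd1d

# Pool one or more physical volumes into a volume group
lvm vgcreate vg0 /dev/wd1d

# Carve out a 10 GB logical volume named "home"
lvm lvcreate -L 10G -n home vg0

# Later, grow the volume by 5 GB without repartitioning the disk --
# this is the "dynamically changed partitions" capability LVM adds
lvm lvresize -L +5G vg0/home
```

The point of the indirection is that file systems sit on logical volumes rather than fixed disk slices, so space can be added (or a volume moved across disks) while the system stays online.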

NetBSD developers said that the 6.0 release is in Beta only at this time and they expect the release to include "some lurking bugs". The current stable release of NetBSD is version 5.1.2, which was introduced last month.

Sunday, February 5, 2012

Rocket Buys Zephyr For Terminal Emulation

Rocket Software consolidated its position in the market for terminal emulation software last week when it acquired Zephyr Development Corp., a Houston, Texas, company that developed the PASSPORT line of terminal emulation and integration solutions for accessing and modernizing applications running on z/OS, IBM i, Unix, and OpenVMS servers. It was the second IBM i-related acquisition for Rocket in a month, and sends a strong signal of support for the platform.

Zephyr was a small, feisty company that delighted in taunting its larger competitors in the mature terminal emulation space and pressing customers to question the value they get from the likes of Attachmate and IBM, the undisputed giants of terminal emulation. The company, which was founded in 1985, made migration from Attachmate's EXTRA! and Reflection products and IBM's Personal Communications (PCOMM) core staples of its marketing efforts, and would often sell its PASSPORT products as part of a larger migration kit. Indeed, Gregg Ledford, a Zephyr co-founder and formerly its CEO, once said he considered Zephyr to be an "Attachmate displacement company."

Rocket--like all terminal emulation providers not named Attachmate or IBM--has heavily leaned on a replacement approach to selling software as well. The company claims its BlueZone line of emulators (obtained via the Seagull Software acquisition in 2006) can save customers 50 to 90 percent in maintenance fees compared to other emulation suites.

Now, with the duo of PASSPORT and BlueZone rocking the emulation world, Rocket will have two fairly young and flexible emulation product sets (as measured by the relative difficulty that vendors showed in getting their legacy code bases to support new Windows OSes such as Vista and Windows 7) to attack the large-but-stationary market for emulation tools. The Massachusetts company will likely market the BlueZone products more strongly to IBM i shops, owing to that product's stronger 5250 heritage; Zephyr supported the midrange server but had a stronger business selling 3270 solutions to mainframe shops.

The Zephyr name will disappear as Rocket focuses on PASSPORT as another solution in its Application Development, Integration, and Modernization business unit. Rocket says it intends to continue developing and adding new features to the PASSPORT products (including PC to Host, Web to Host, and Host Integration Objects), and will keep the PASSPORT development teams in Houston and the United Kingdom intact. Zephyr's founders, including Ledford and David Muck, have elected to leave the business, Rocket says. According to Hoovers, Zephyr was owned by its employees.

Financial terms of the deal were not disclosed, and Rocket (a private company backed by the private equity firm Court Square) would not share additional details such as the number of employees Zephyr had. Rocket said only that Zephyr had "hundreds" of customers, and pointed to a list of 115 customers on the Zephyr web site with Fortune 200 names such as Bank of America, Comcast, Lockheed Martin, Progressive, Verizon, and Xerox.

This was the second acquisition of the year for Rocket, a 900-person company that was founded in 1990 primarily as an OEM developer for IBM, but which has grown into a $300 million-plus company (estimated as of 2009) as the result of 30-plus acquisitions over the years. The revenue figure has undoubtedly grown thanks to Rocket's December 2011 acquisition of the iCluster high availability business from IBM for an undisclosed amount. Rocket also bought the IBM i lifecycle management tool vendor Aldon less than a year ago (also for an undisclosed sum).

Rocket's position in the IBM i marketplace (and its maintenance revenue stream) has been bolstered by the Zephyr, iCluster, and Aldon acquisitions, but Rocket has many other irons in the IT fire, and insists its specialty remains software development, a claim bolstered by the fact that many of the executives of acquired companies stay on as heads of Rocket's various business units, brands, and software development labs.

The vendor's web site now touts 110 products across 13 brands, including Aldon; Arkivio for storage and archive software; AS for mainframe business intelligence software (formerly ASTRAC); BlueZone; CorVu for business performance management and BI; Folio/NXT for electronic publishing; iCluster; MainStar for mainframe systems management tools; M204 for mainframe data management; Networks for network management; Seagull; Servergraph for backup tools; and U2, for the UniVerse and UniData "multivalue" database management systems Rocket acquired from IBM in 2009. When PASSPORT is tucked into place, there will likely be 113 products across 14 brands.