|Shares Out. (in M):||111||P/E||24.2x||20.8x|
|Market Cap (in $M):||4,268||P/FCF||21.2x||18.6x|
|Net Debt (in $M):||-604||EBIT||220||268|
INFA is a uniquely positioned software company that has grown organically for ten consecutive years and generates a superb Return on Net Total Capital. (RONTC was 28.8% in 2012 even though 2012 was a difficult year for the business.) While the stock’s valuation may not look compelling at first glance, it is towards the bottom of its historic multiple range, and I will show why its return should be at least respectable from today’s price.
INFA is my first foray into infrastructure software. I thus found the learning curve for this story to be particularly steep because it required learning an entirely new technical vocabulary. I ended up writing a six-page glossary as part of this effort, and I have included that glossary at the end of my report to expedite readers’ progress up this curve.
Computer programs such as ERP systems and CRM systems are usually built around the front-line workers who will be their primary users. The data contained within these systems, however, is often needed by other parts of the organization, whether adjacent departments that use different but related computer systems, or management attempting to solve problems, understand trends, and plan for the future. Businesses’ data is thus scattered among many computer systems, a phenomenon that is referred to as “Data Fragmentation.” Furthermore, the overriding trend is for data fragmentation to become worse over time even though some forces occasionally rein it in.
The primary cause of data fragmentation is users’ understandable desire to adopt the software tools that best suit the needs of their immediate departments. This favors best-of-breed solutions over integrated solutions. (Integrated solutions are much easier for IT departments to support because the various modules are designed to work seamlessly with each other. These types of solutions are thus ideal for organizations with limited IT resources such as small businesses, municipal governments, and non-profits.) Another cause of data fragmentation is the general reluctance to simply replace legacy systems with newer systems. This results in multiple generations of IT hardware and software that run side-by-side and sometimes redundant systems that run in parallel to each other. Having personally led a “forklift upgrade” of a core IT system, I can readily attest to the pain, operational problems, short-term costs, and risks associated with migrating to a completely new computer system. To be sure, companies do eventually migrate their systems, but the pace of this is usually glacial, and this tends to result in environments that have been cobbled together over time instead of installed in one unified replacement cycle. A more recent cause of data fragmentation has been the adoption of cloud computing. This has resulted in hybrid IT environments where data is not only scattered among multiple systems behind a company’s firewall, but it is also scattered among multiple systems outside of the company’s firewall. An even newer source of data fragmentation comes from new data sources such as social networking and device data (i.e. the surprising amount of data being collected by smart phones and similar devices). These are relatively novel types of data being generated by third parties, and they are usually unstructured.
While data is created and stored in disparate systems, it often needs to be shared among those systems, and this is accomplished through digital pipes called “Integrations.” Unlike physical pipes, however, the data often undergoes various transformations as it passes through the pipe in order to protect the data (i.e. encryption) or change its form so that it will be usable in the target system. These integrations are a form of software called Middleware, and like all other software, they have to be developed, tested, maintained, and sometimes audited. For example, if either the source system or the target system is updated, the middleware between them might also have to be updated. I spoke to a systems integrator who explained that middleware often has a “step-child-like” status. This is because the developers are primarily focused on the applications because it is easier to appreciate how those will improve the performance of the workers who will be using them. Middleware, by contrast, resides in between computer systems and doesn’t impact end users directly. Importantly, most integrations are hand coded, which means that they have to be developed, tested, and maintained manually, and this naturally becomes harder to do over time as data becomes more fragmented and the types of data and data processing systems become more diverse.
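To make the idea of a hand-coded integration concrete, the sketch below shows a toy point-to-point pipe that extracts records from one system’s export format, transforms them, and loads them into another system’s format. The systems, field names, and transformations are invented for illustration; the point is that every mapping lives in bespoke code that must be rewritten whenever either end of the pipe changes.

```python
import csv
import io
import json

def crm_to_warehouse(src_csv: str) -> str:
    """A hand-coded pipe from a hypothetical CRM export to a hypothetical
    warehouse loader format. Every field rename, retype, and reformat is
    written (and maintained) by hand; if either system changes, this code
    must change with it."""
    out = io.StringIO()
    for row in csv.DictReader(io.StringIO(src_csv)):
        record = {
            "customer_id": int(row["CustID"]),        # rename + retype
            "name": row["Name"].strip().title(),      # normalize casing
            # reformat DD/MM/YYYY into YYYY-MM-DD for the target system
            "signup_date": "-".join(reversed(row["Signup"].split("/"))),
        }
        out.write(json.dumps(record) + "\n")
    return out.getvalue()

src = "CustID,Name,Signup\n42, ada lovelace ,10/12/2015\n"
print(crm_to_warehouse(src))
```

An organization with hundreds of systems can accumulate thousands of such pipes, each with its own quirks, which is the maintenance burden the write-up describes.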
INFA’s flagship product is called PowerCenter, and it provides an automated tool that developers can use to create integrations within a visual development environment. Instead of coding integrations by hand, developers can use PowerCenter to drag and drop icons that represent the data sources and data targets and then define whatever calculations are required to transform the data as it passes through the pipe. INFA thus offers an automated means through which to create the pipes instead of building the pipes by hand.
A key part of INFA’s value proposition is that INFA updates the integrations that users have created through PowerCenter whenever the source or target systems are updated. INFA’s integrations are thus much easier to maintain, and they are also much easier to audit if needed. INFA has been supporting some of the source systems for as long as twenty years and is thus very familiar with their various generations. While it is technologically straightforward to code a single integration from Point A to Point B when A and B are already defined, it is incredibly complex to develop a platform that can integrate any given Point A to any given Point B regardless of how they are defined. INFA has many small competitors who provide specific integrations, but INFA’s platform is distinguished by the fact that it can create integrations between over 500,000 systems. This is particularly appealing to users who want to future-proof their IT environments since INFA allows them to quickly modify their pipes whenever a given program is replaced or modified.
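The contrast with hand-coding can be sketched by treating the mapping as data rather than code. This is a minimal illustration of the concept, not PowerCenter’s actual mapping format: the mapping (what moves where, and how it is transformed) is defined once, separately from the endpoints, so a source or target system can be swapped without rewriting the mapping itself.

```python
# A toy mapping defined as data: (source field, target field, transform).
# The field names and the mapping format are invented for illustration.
MAPPING = [
    ("CustID", "customer_id", int),
    ("Name",   "name",        str.title),
    ("Region", "region",      str.upper),
]

def run_mapping(mapping, source_rows, target_writer):
    """A tiny 'engine' that applies a declarative mapping. The mapping
    never changes when the target changes -- only the writer does."""
    for row in source_rows:
        target_writer({dst: fn(row[src]) for src, dst, fn in mapping})

rows = [{"CustID": "7", "Name": "acme corp", "Region": "emea"}]

# Two interchangeable "targets": swap one for the other, mapping unchanged.
warehouse, audit_log = [], []
run_mapping(MAPPING, rows, warehouse.append)
run_mapping(MAPPING, rows, audit_log.append)
print(warehouse[0])  # {'customer_id': 7, 'name': 'Acme Corp', 'region': 'EMEA'}
```

This separation of mapping from endpoint is what makes an integration easy to re-point when a system is replaced, which is the “future-proofing” argument made above.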
INFA’s primary competition is hand-coding, so they don’t face commercial competitors in the majority of their deals. Hand-coding has remained the predominant approach for a variety of reasons. The first is that developers are fluent in writing code, so hand coding is often the most expedient solution even if it will be more costly over the long-term. The second is that the decision about how to create the integrations is usually made by a team leader who likely has a preference to use the developer resources that he already has instead of requesting funds to purchase an off-the-shelf integration solution. A third dynamic is that developers see integrations as a source of job security since they may be the only ones capable of maintaining the integrations over time. Lastly, while I was not able to confirm this with INFA, I suspect that third party systems integrators have a huge financial incentive to hand code integrations. The enterprise software market is roughly $40BB annually, but only $8BB of that is spent on the software itself; the remaining $32BB is spent on services installing and maintaining the software. Systems Integration contracts are usually structured as Time & Materials contracts, and this encourages those vendors to create solutions by hand.
A variety of factors, however, are making hand-coding a less tenable approach over time. The first is the inexorable trend of data fragmentation, which requires developers to acquire more and more skills as types of systems and data proliferate. This encourages them to adopt INFA’s platform. Another factor was the Great Recession, which led CIOs to examine their IT organizations on a more granular level than they had in the past in an attempt to improve efficiency. As CIOs looked for automation opportunities, Informatica’s products often rose to the surface, and this created a sea-change in INFA’s ability to present to high level executives. Prior to 2007, INFA’s CEO, Sohaib Abbasi, was often unable to secure meetings with CIOs, but he is now readily able to get such meetings. Another, though less discussed, dynamic encouraging the adoption of INFA’s products is the fact that corporate IT customers are increasingly asking their systems integrators to bear some of their project’s risks, primarily through fixed-price contracts. (Interestingly, this trend seems to be a secondary consequence of cloud computing, which is showing organizations the benefits of pre-packaged, fixed-price solutions.) INFA is thus receiving more interest from systems integrators since INFA’s platform offers a straightforward and predictable means by which to integrate various systems.
One of the most attractive features of INFA’s franchise is that they face commercial competitors in less than half of their deals, and when they do face Tier 1 competitors, Informatica wins over 75% of the time. INFA competes with IBM in 20-25% of their deals, but IBM’s solution (“Essential”) is typically sold as part of a package sale with other IBM products. INFA competes with ORCL, SAP, and MSFT relatively infrequently. Interestingly, ORCL offers its own Data Integration product, but ORCL is also INFA’s #1 OEM customer because ORCL’s own customers want a Data Integration solution that can work with whatever systems they currently use or may choose in the future. This gets at another critical facet of the INFA franchise: INFA’s leadership position is structural. Customers prefer using an independent Data Integration solution because they want complete freedom in deciding how to build their computing environments. For example, they don’t want to be biased toward ORCL’s other products because they are already using ORCL’s Data Integration product. INFA’s leadership position is further reinforced by the application software companies themselves. These firms are competing with each other, which limits their willingness to share the technical information required to pre-build integrations between their products.
Over the last several years, INFA has extended their product line into adjacent areas through acquisitions and R&D. For example, data is only useful if it is trustworthy, so INFA offers a Data Quality product to ensure that data is reliable. The company has also moved into Master Data Management (MDM) and Information Lifecycle Management (ILM). These complementary product lines are both discussed in the glossary. These adjacent product areas are relatively new for INFA, and only 43% of INFA’s active projects during 2Q13 included even one of INFA’s new products. Management thus believes that cross-selling these products into new and existing customers is INFA’s largest growth opportunity, though there is also ample opportunity to grow revenue from their core products.
As noted previously and shown in the financial tables at the end of this report, INFA realized phenomenal revenue growth from 2003 through 2011, even growing organically throughout the Great Recession. While INFA’s results during 2012 were strong on a relative basis, they were very disappointing compared to the rapid revenue growth and margin expansion of previous years. These problems arose from discrete sources that the company has made significant progress resolving, but questions on their most recent earnings call, along with the stock’s multiple, suggest that the company still faces some skepticism about their ability to resume double digit revenue growth and healthy margin expansion.
Prior to 2012, INFA had two overlaid salesforces. INFA’s account managers sold the company’s core products, and an independent, specialized team sold INFA’s newer products such as MDM and ILM. This approach led to competition between these two sales teams. Moving into 2012, management thus decided to merge these teams, but they made numerous mistakes in this reorganization. The first was the classic problem that companies face when their salespeople know they will soon be reassigned to a new territory: they stop diligently cultivating their sales pipeline because they know that they won’t be the ones to benefit from those efforts. So while INFA’s sales pipelines looked fine superficially moving into 2012, their quality had actually deteriorated significantly. The consequences of this were amplified by the fact that the territory changes were extensive. A second miscalculation came from the fact that both of the original salesforces overestimated their familiarity with each other’s products. They were thus poorly positioned when they began attempting to sell new products into new accounts. These changes were a step in the right direction, but this new structure still wasn’t quite what the company needed, and the shift was also poorly executed. These problems surfaced in 2Q12 when license revenue abruptly declined 17.8% after having grown consistently for the prior eleven quarters.
In some ways, these changes reflected some basic growing pains as INFA transitioned into a much larger, more complex company whose sales effort was beginning to require a deep understanding of its customers’ needs and IT environments—especially when selling their newer products. Paul Hoffman joined INFA as their Executive of Worldwide Sales in 2005, and during the 2012 Analyst Day, he walked through how INFA’s sales organization had changed radically over the prior seven years. When Mr. Hoffman joined, INFA generated all of its roughly $200MM in revenues by selling one product (ETL) into one vertical market (Data Warehousing). They did this solely through territory managers, which made sense given their limited product line and limited sales resources. By 2011, however, INFA’s revenue had tripled, and the company had gained several complex, but complementary, new products which offered them the opportunity to cross-sell extensively into their customer base. Pursuing this opportunity, however, required a much more sophisticated and coordinated sales effort. While INFA did not mention this publicly, by 2012, the sales organization had grown to a point beyond what Mr. Hoffman was interested in managing, and the company had already begun to search for his successor. The problems that surfaced in 2Q12 accelerated this search, and the company installed John McGee in this role in July 2012. Mr. McGee implemented a systematic “sales cadence” in which the salespeople have deliverables every week, including updates about opportunities for future quarters. This is resulting in better visibility, a better qualified pipeline, and earlier notification of when INFA needs to change course to better pursue an upcoming opportunity.
Another problem that surfaced during 2Q12 was trouble within INFA’s European organization. This was primarily due to turnover in the region’s leadership, especially their regional sales manager. While the current European Sales Manager is the fourth one in recent memory, management’s commentary over the last two quarters suggests that this region finally has a permanent management team and is successfully implementing the changes that have proven effective within the American sales organization.
INFA thus appears to be well on its way to building a sales organization that can realize its larger market opportunity. Ongoing efforts in this area, however, as well as incremental investments in R&D are depressing operating margins, and management indicated that margins will not return to their prior peak until sometime after 2014. I believe this testifies to the size and longevity of INFA’s market opportunity, though it is admittedly weighing on profitability in the interim.
Numerous high-level and discrete trends should allow INFA to return to double digit revenue growth. The first of these is the growing complexity of IT infrastructures which is making hand coding progressively less tenable and encouraging the adoption of INFA’s automated tool. As noted previously, the Great Recession and the emerging trend toward fixed-price systems integration contracts should augment this general shift.
One of the attractive facets of the INFA story is that their product is flexible and can thus address a wide array of customer needs (a.k.a. “use cases”). Below are some examples of use cases that were given on the 2Q13 CC:
INFA should also benefit from a trend towards performing more business analytics. While Business Intelligence (BI) has been around for a long time, the original BI systems were run by a handful of extremely smart people who understood how to use them and would run reports to be disseminated throughout the organization. The newer BI systems, however, have evolved so that they are easier to use and produce more useful information, both of which are increasing adoption. (I suspect that lower cost computing power and data storage are further improving BI’s value proposition.) Demand for BI systems is also increasing as more constituents become familiar with what data is available and how to use it. Lastly, businesses are becoming more familiar with how to customize the presentation of data and distribute it to various constituents on an ongoing basis through tools such as “Dashboards.”
Big Data represents an advanced form of BI that early adopters are currently experimenting with. While Big Data is often referred to as a distinct market opportunity, I consider it to be an extension of this broader trend of conducting more analytics. This benefits INFA because they provide the pipes through which to quickly connect all of the source systems to the analytic systems. INFA is well positioned to serve Big Data initiatives for two reasons. The first is that there are six different types of new analytic platforms that can be used to conduct Big Data analyses, and users are just beginning to determine which platforms are best suited to address various needs. PowerCenter, Big Data Edition is thus particularly attractive for creating the related integrations because it allows users to map the data once and then quickly redeploy the integration to a new analytic platform if needed (See “Data Mapping” in the glossary). A second advantage PowerCenter offers with Big Data projects applies specifically to one of the six new types of analytic platforms that is called Hadoop (See “Apache Hadoop” in the glossary). Hadoop allows users to harness thousands of computer processors to analyze a vast amount of data, but creating integrations for it requires learning its programming language, MapReduce. MapReduce is a relatively new programming language, so there isn’t a wide set of developers who are already fluent with it. PowerCenter, Big Data Edition allows users to sidestep the need to learn this language by simply creating the integration within PowerCenter which, by contrast, already has a large population of experienced users.
A longer-term growth opportunity could come from a recently introduced iteration of INFA’s core technology called “Vibe.” When INFA started about 20 years ago, their goal was to abstract away the details of the underlying processing environment from what developers were trying to do with the data. The goal of abstracting details like the OS, HW platform and Database environment was to make it easier to create an integration and ensure that this integration would be future-proof so that if the processing environment changed, the integration could easily adapt. This concept of abstracting away the details so that developers could easily access and use data has grown more compelling as computing environments have grown radically more complex over the last two decades.
Vibe is a new embodiment of INFA’s core technology that allows it to operate as a standalone “Virtual Data Machine” (VDM). INFA is attempting to do for data what Java did for applications. The Java Virtual Machine was a virtual computer that could run any program coded in Java on a multitude of host platforms. This allowed developers to “Write Once, Run Anywhere.” They could write their programs in Java and then deploy those programs on a wide variety of host computers because the Java Virtual Machine was bridging the gap between the Java code and the host environment. Programmers thus didn’t need to understand the host environment or modify their code to run in different host environments. Vibe unifies INFA’s core technologies and allows them to be deployed on any platform in a virtualized manner. Developers will thus be able to “Map Once, Deploy Anywhere.”
Vibe is designed to simplify the data infrastructure in two ways:
So Vibe can access and process any kind of data in any kind of system, hence the description, “Virtual Data Machine.” Vibe aims to help developers by reducing the number of skills they need to master and to help IT staff by simplifying the infrastructure that they have to manage.
In the medium-term, INFA will promote a “Vibe Inside” Software Development Kit (SDK) that programmers can embed within their applications. This SDK represents an attempt by INFA and industry partners to promote a modern, standards-based data infrastructure for the next generation of data centric applications. Over the long-term, INFA plans to adapt Vibe for the industrial internet which is quite relevant since the industrial internet is predicated on being able to harness a wide variety of machine interaction data in real-time.
For more information about Vibe, please refer to the following two articles:
It is obviously too early to determine whether INFA will successfully position Vibe as a ubiquitous software component that developers use to unify a multitude of computing systems and devices, but I believe that INFA is the best positioned company to attempt this. During their 2013 analyst day, management noted that re-packaging INFA’s technology as an embeddable “Virtual Data Machine” is leading customers to quickly recognize additional ways that they can use INFA’s technology beyond the use cases that they considered when this technology was only offered as packaged software. So while Vibe is still in its early stages, it appears to meet a real need.
Prospects for INFA Stock
INFA has unfortunately inched up since I began writing my report, but the price is still at a good entry point. At $38.35, using 2014 estimates, INFA is trading at:
As shown in the tables below, this is around the middle of its historic P / Adj. E range, the middle-to-lower part of its historic P / FCF range, and the lower portion of its historic EV / EBITDA range. When viewing these figures, it is important to remember that R&D and S&M investments will continue to weigh on margins in 2014, so 2014’s earnings are below INFA’s potential.
My long-term forecast is for INFA to grow revenues at 11-13% annually, though figures provided during their 2013 analyst day as well as their historical growth rate suggest that 11% is likely to be more of a “base case” scenario. My forecast assumes that margins resume their climb in 2014, but at a more gradual pace than prior years. I’m forecasting for INFA to regain their prior-peak operating margin in 2015 or 2016. I am also forecasting 3% annual growth in INFA’s diluted share count because they have issued an unusually high number of stock options. The company, however, does generate abundant FCF, so repurchases could make this dilution less than I’ve forecasted. Altogether, these assumptions yield 2017 Adj. EPS estimates of $2.21 to $2.55. Using exit multiples of 20.0x and 22.0x, respectively, suggests exit prices of $53.86 to $66.24 after you add in cash per share, which should nearly double from today’s level. Annualized over four years, this implies annual returns of 8.9% to 14.6%. (You could arguably annualize this over 3.5 years, which would imply annual returns of 10.2%-17.0%.) I believe there is upside to both my conservative and aggressive scenarios.
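The annualized returns above follow directly from the entry price and the two exit-price scenarios. A quick check, using only the figures given in this write-up:

```python
def cagr(entry: float, exit_: float, years: float) -> float:
    """Compound annual growth rate implied by buying at `entry`
    and selling at `exit_` after `years` years."""
    return (exit_ / entry) ** (1 / years) - 1

entry = 38.35                      # price cited in the write-up
for exit_price in (53.86, 66.24):  # conservative / aggressive 2017 exits
    print(f"${exit_price}: {cagr(entry, exit_price, 4):.1%} per year")
# $53.86: 8.9% per year
# $66.24: 14.6% per year
```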
Historic financial results are shown below:
|Fiscal Year End: December 31|
|In Millions, Except for Percentages, per Share Amounts and Supplemental Metrics|
|Consulting, Education & Other Rev.||41.6||35.2||34.3||43.7||53.6||64.7||73.7||71.1||99.5||116.1||129.8|
|% Change (Yr./Yr.)||-15.5%||-2.6%||27.4%||22.6%||20.8%||14.0%||-3.6%||40.1%||16.6%||11.8%|
|% Change (Yr./Yr.)||5.2%||6.9%||21.7%||21.4%||20.5%||16.5%||9.9%||29.8%||20.6%||3.5%|
|Appx. Organic Rev. Growth (Yr./Yr.)||4.1%||3.9%||21.7%||19.4%||18.8%||12.7%||4.3%||18.1%||18.3%||1.3%|
|Cost of License & Sub. Revenue||6.2||3.1||3.8||4.5||7.0||3.7||3.3||3.1||4.5||5.0||4.5|
|Cost of Maint. & C,E&O Revenue||39.2||38.8||40.3||46.8||56.9||67.5||78.3||74.4||97.9||115.4||121.8|
|Total Cost of Revenue||45.4||42.0||44.1||51.2||63.9||71.2||81.6||77.5||102.4||120.4||126.3|
|License & Sub. Gross Profit||93.8||91.5||94.2||115.7||139.1||171.6||192.5||211.2||290.6||348.7||316.5|
|Maint. & C,E&O Gross Profit||56.3||72.1||81.4||100.5||121.6||148.4||181.7||212.0||257.1||314.7||368.8|
|Total Gross Profit||150.0||163.6||175.6||216.2||260.7||320.1||374.1||423.2||547.7||663.4||685.3|
|License & Sub. Gross Margin||93.8%||96.7%||96.1%||96.3%||95.2%||97.9%||98.3%||98.5%||98.5%||98.6%||98.6%|
|Maint. & C,E&O Gross Margin||58.9%||65.0%||66.9%||68.2%||68.1%||68.7%||69.9%||74.0%||72.4%||73.2%||75.2%|
|Total Gross Margin||76.8%||79.6%||79.9%||80.8%||80.3%||81.8%||82.1%||84.5%||84.2%||84.6%||84.4%|
|% of Sales||23.3%||23.0%||22.4%||15.7%||16.0%||16.9%||15.0%||14.7%||15.2%||15.5%||15.9%|
|% of Sales||44.4%||42.1%||42.8%||44.4%||41.3%||39.0%||37.7%||37.3%||36.6%||34.2%||36.0%|
|% of Sales||10.4%||10.1%||9.5%||7.7%||7.2%||7.9%||7.2%||7.5%||6.2%||6.2%||6.4%|
|% of Sales||-1.4%||4.3%||5.3%||13.1%||15.8%||18.1%||22.2%||25.0%||26.2%||28.7%||26.2%|
|Share Based Compensation||0.2||0.8||3.0||0.7||14.1||16.0||16.3||17.9||23.4||33.3||42.8|
|Amort. of Acquired Tech.||1.0||1.0||2.3||0.9||2.1||2.8||4.1||8.0||13.3||19.5||22.0|
|Amort. of Intang. Assets||0.1||0.1||0.2||0.2||0.7||1.4||4.6||10.1||9.5||7.7||6.6|
|Facilities Restruct. Charges (Benefits)||17.0||112.6||3.7||3.2||3.0||3.0||1.7||1.1||(1.1)||2.2|
|Other Charges (Benefits)||4.5||1.3||(11.1)||(1.7)||(0.5)||0.3||2.8|
|Other Income (Expense)||1.3||3.3||(0.1)||(0.7)||(0.6)||0.6||0.9||1.2||0.2||(1.5)||(1.7)|
|% of Sales||-7.5%||4.6%||-47.0%||13.5%||12.8%||16.0%||20.2%||17.9%||18.6%||21.3%||17.0%|
|Income Tax Expense||0.9||2.1||1.2||2.2||5.5||8.0||36.0||25.6||34.8||49.1||44.6|
|Tax Impact of Non-GAAP Adjustments||4.3||(1.9)||10.4||14.1||17.3||22.4|
|Adj. Net Income||2.8||13.8||13.7||39.3||57.7||73.5||74.8||89.6||119.2||159.9||147.1|
|Adj. Diluted EPS||$0.03||$0.16||$0.16||$0.43||$0.62||$0.75||$0.76||$0.91||$1.13||$1.43||$1.31|
|Basic Shares Out.||79.8||82.0||85.8||87.2||86.4||87.2||88.1||88.0||92.4||104.0||107.9|
|Diluted Shares Out.||79.8||85.2||85.8||92.1||92.9||103.3||103.3||103.3||109.1||112.5||112.1|
|Net Total Capital||4.4||53.6||(57.5)||(51.6)||44.5||45.6||116.0||219.6||374.7||389.8||567.5|
|Total Rev. / Employee (000's)||239||258||268||290||291||303||306||298||335||335||302|
|Non-Maint. Rev. / Employee (000's)||173||163||162||177||179||186||181||170||203||201||168|
|Non-Maint. Rev. / S&M Employee (000's)||471||466||470||498||504||525||511||482||593||591||491|
|Total Debt / Total Capital||0.0%||0.0%||0.0%||0.0%||50.3%||42.4%||38.3%||29.4%||23.7%||0.0%||0.0%|
|Tangible Book Value||$2.78||$2.37||$1.29||$1.49||$0.43||$1.29||$0.98||$1.28||$1.52||$4.40||$4.69|
|Net Cash (Debt) per Share||$3.11||$2.77||$2.95||$2.98||$1.97||$2.59||$2.32||$2.55||$4.32||$5.35||$4.78|
|Ave. Net Cash (Debt) per Share||$2.98||$2.94||$2.86||$2.97||$2.47||$2.28||$2.45||$2.44||$3.43||$4.84||$5.07|
|% Change (Yr./Yr.)||5.2%||6.9%||21.7%||21.4%||20.5%||16.5%||9.9%||29.8%||20.6%||3.5%|
|% of Total Revenues|
|Cash & Equivalents||105.6||82.9||88.9||76.5||120.5||203.7||179.9||159.2||208.9||316.8||190.1|
|Deferred Tax Assets||18.3||22.3||23.7||22.7||21.6||23.4|
|Prepaid Expenses & Other||8.7||5.1||7.8||9.3||10.4||14.7||12.5||15.3||32.3||23.2||29.4|
|L-T Deferred Tax Assets||0.5||7.3||8.3||18.3||23.0||24.1|
|Accrued Liabilities & Other||24.4||25.7||16.1||17.4||26.5||25.4||34.5||37.4||50.2||58.9||64.5|
|Accrued Comp. & Related Exp.||12.7||14.3||15.7||20.5||25.8||33.1||29.4||41.5||56.3||58.0||55.4|
|Income Taxes Payable||2.1||2.0||3.1||4.6||5.2||0.2||12.9||1.2|
|Accrued Facilities Restructuring Charges||4.8||4.6||20.1||18.7||18.8||18.0||19.5||19.9||18.5||17.8|
|Convertible Senior Notes||200.7|
|Convertible Senior Notes||230.0||230.0||221.0||201.0|
|L-T Accrued Facilities Restructuring Charges||14.9||10.5||89.2||75.8||65.1||56.2||44.9||32.8||20.4||5.5|
|L-T Deferred Revenues||7.2||8.2||7.0||13.7||8.8||4.5||7.0||6.6||8.8|
|L-T Deferred Tax Liab.||2.2||0.5||0.3||2.5|
|L-T Income Taxes Payable||6.0||20.7||12.0||12.7||16.7||21.2|
|Total Liabilities & Equity||365.2||402.8||409.8||441.0||696.8||798.6||863.1||989.6||1,189.6||1,380.7||1,512.2|
|Cash Flow Model|
|Fiscal Year End: December 31|
|In Millions, Except for Percentages & per Share Amounts|
|Depreciation & Amortization||10.5||11.2||9.3||9.2||10.1||10.5||5.6||5.5||6.1||6.3||12.3||14.7||14.5|
|Share Based Compensation||0.4||0.9||3.4||0.7||14.1||16.0||16.3||17.9||23.4||33.3||42.8||54.0||65.5|
|Amort. Of Intangibles & Acq. Tech.||1.1||1.2||2.5||1.1||3.6||4.2||8.7||18.0||22.9||27.2||28.6||29.0||16.4|
|Non-Cash Facilities Restruct. Chgs.||1.9||21.6||3.7||3.2||3.0||3.0||1.7||1.1||(1.1)|
|Accrued Facilities Restruct.Chgs.||10.4||(4.5)||94.1||(18.3)||(13.8)||(12.4)||(12.6)||(13.2)||(14.8)||(14.4)||(24.0)|
|Free Cash Flow||1.8||17.9||14.4||20.3||53.7||70.0||71.7||90.4||116.0||155.4||138.7||175.7||206.3|
|Cap. Ex. (% of Sales)||3.5%||1.2%||5.7%||3.7%||1.2%||1.5%||1.0%||0.7%||1.1%||1.6%||1.7%||1.5%||1.5%|
|Interest Expense (Income), Net||(5.0)||(3.8)||(3.4)||(7.3)||(12.4)||(14.6)||(6.9)||0.7||2.7||(2.7)||(3.5)||(3.2)||(3.7)|
|Income Tax Expense||0.9||2.1||1.2||2.2||5.5||8.0||36.0||25.6||34.8||49.1||44.6||42.1||61.6|
|CF / Share||$0.11||$0.24||$0.31||$0.33||$0.62||$0.74||$0.74||$0.91||$1.13||$1.49||$1.36||$1.70||$1.93|
|FCF / Share||$0.02||$0.21||$0.17||$0.22||$0.58||$0.68||$0.69||$0.88||$1.06||$1.38||$1.24||$1.58||$1.80|
|EBITDA / Share||$0.06||$0.22||$0.29||$0.27||$0.54||$0.67||$1.02||$1.16||$1.47||$1.91||$1.73||$2.05||$2.43|
|Est. Interest Inc. per Share, Net||$0.04||$0.03||$0.03||$0.05||$0.09||$0.09||$0.04||$0.00||$0.00||$0.02||$0.02||$0.02||$0.02|
|Cash & Equivalents||604.2|
|Diluted Shares Out.||111.3|
|Net Cash (Debt) per Share||$5.43|
|Net P/Adj. E||24.2||20.8|
|EV / EBITDA||16.1||13.5|
Agile SW Development: a group of SW development methods based on iterative and incremental development where requirements and solutions evolve through collaboration between self-organizing, cross-functional teams. It promotes adaptive planning, evolutionary development and delivery, a time-boxed iterative approach, and encourages rapid and flexible response to change. This approach promotes foreseen interactions throughout the development cycle.
Time-Boxing: an approach where the deadline is absolutely inflexible, but the scope can be reduced as needed. The alternative is Scope-Boxing where the scope is fixed. When Scope-Boxing is used, additional time and resources are generally required, which results in cost and time over-runs. Time-Boxing, by contrast, requires the project stakeholders to prioritize what they want and thus determine which elements will be completed first. My personal experience suggests that Time-Boxing is a better approach because it addresses the most important functions first and is better aligned with customers’ tendencies to realize what they actually need / want towards the middle or end of the project. It consequently avoids wasted effort due to misguided initial specifications.
Apache Hadoop: an open-source implementation of MapReduce. It enables applications to use thousands of computationally independent computers to process petabytes of data.
MapReduce: a SW framework for processing huge data sets using a large number of computers working in parallel. In the “Map Step,” a master computer converts the input into a number of sub-problems that are then passed along to worker computers. (Worker computers may then further divide their sub-problems to be solved by even lower level worker computers.) Eventually, however, the sub-problems are solved, and the “Reduce step” occurs when the master computer collects the sub-answers and combines them to form the final answer to the problem that it was originally asked. Note that the computation in this framework is distributed among a number of processors, and they can each work in parallel. In practice, however, parallel processing is limited by the number of independent data sources and the processors near each data source.
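The Map and Reduce steps are easiest to see in miniature. The Python sketch below runs the classic word-count example on a single machine; the function names (`map_step`, `shuffle`, `reduce_step`) are illustrative labels for the framework's stages, not the API of any real MapReduce implementation.

```python
from collections import defaultdict

def map_step(document):
    # "Map": break the input into sub-problems by emitting (key, value)
    # pairs -- here, (word, 1) for every word in the document.
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    # Between Map and Reduce, the framework groups all values by key.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_step(key, values):
    # "Reduce": combine the sub-answers for one key into a final answer.
    return key, sum(values)

documents = ["big data big answers", "big deal"]
mapped = [pair for doc in documents for pair in map_step(doc)]
counts = dict(reduce_step(k, v) for k, v in shuffle(mapped).items())
# counts["big"] is 3: the per-document sub-counts were combined.
```

In a real cluster the `map_step` calls would run in parallel on worker computers near each data source, which is where the framework's speed comes from.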
Appliance: pre-integrated package of hardware (processors, storage media, backplane, etc.) and SW (OS and application SW) that performs a specific function. Historically, users cobbled together their own systems by implementing applications on general purpose hardware running general purpose operating systems. This was complex to create and maintain. In response to customers’ desire for turnkey solutions, vendors such as Oracle began to offer pre-packaged systems designed for specific applications, such as running a database. By greatly limiting the number of SW and HW combinations that are available, solution deployment becomes easier, and most problems can be solved through the appliance’s management software.
Application Programming Interface (API): a specification for how software components should interact with each other. It is usually a library that specifies routines, data structures, and other variables.
Brokerless Messaging: in many systems where applications communicate with one another, the messages all pass through a “broker” which sits in the middle of the system and routes the messages. The alternative is brokerless messaging where the applications communicate directly with each other.
The differences between these architectures are roughly analogous to two different types of air traffic systems. A brokered system is like a hub-and-spoke network where all flights fly to and from a single national hub. A brokerless system is like a network where all flights are non-stop. In computing, the primary advantage of a brokered system is that the applications don’t need to know the location of the other applications—they only need to know the location of the broker. This makes the connections easier to create and manage. Another advantage is that the broker holds the messages and thus serves as a kind of buffer between the applications. Sending and receiving applications thus don’t have to be available at the same time, and if the sending application breaks down, the messages can still be retrieved at the broker. One disadvantage of brokered systems is that they require considerable network resources because the data has to pass through the broker every time it moves between applications. Returning to the airport example, the “data” is taking multi-city business trips, but it has to pass through the hub each time it changes cities, and this requires more flights. The second disadvantage is that the broker becomes a major bottleneck, and the various applications might have to idle as they wait for it. Returning to the airport analogy again, if all passengers had to pass through a single national hub, that hub would become extremely congested with frequent delays.
Database Management System (DBMS): an application used to define, create, query, update, and administrate databases.
Data Integration: the process of combining data from different sources in order to provide a unified view of the data. The need for data integration is growing as the number of data sources proliferates and as the number of uses for this data multiplies.
Data Mapping: the process of mapping data elements between two different data models. For example, two different computer systems might store order numbers in different places. Data mapping identifies those two different places as the first step of enabling those computer systems to communicate with each other. Data mapping can be performed for other purposes as well. One example would be identifying data relationships in order to understand the data’s lineage (i.e. “Where did this number come from? Did it get changed as it passed through the Data Warehousing process?”). Another example would be identifying and eliminating redundant data as information from multiple databases are consolidated into a single database.
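A minimal sketch of the order-number example: two systems store the same fields under different names, and a mapping table translates one schema into the other. The field names and records here are hypothetical, invented purely for illustration.

```python
# System A's record, and a mapping from its field names to System B's.
# All names are hypothetical examples.
system_a_record = {"OrderNo": "A-1001", "Cust": "Acme"}
field_map = {"OrderNo": "order_number", "Cust": "customer_name"}

def translate(record, mapping):
    # Apply the data map so System A's record fits System B's schema.
    return {mapping[name]: value for name, value in record.items()}

system_b_record = translate(system_a_record, field_map)
# system_b_record now uses System B's field names for the same data.
```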
Data Masking: the process of hiding sensitive data such as SSN’s or credit card information in order to restrict access to such data.
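A simple masking rule can be sketched in a few lines. This example hides all but the last four digits of an SSN; the exact masking policy (what to keep, what character to substitute) is an assumption for illustration, since real products support many such rules.

```python
def mask_ssn(ssn):
    # Replace every digit except the last four with an asterisk,
    # preserving the dashes so the format stays recognizable.
    return "".join(
        ch if ch == "-" or i >= len(ssn) - 4 else "*"
        for i, ch in enumerate(ssn)
    )

masked = mask_ssn("123-45-6789")
# masked is "***-**-6789"
```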
Data Warehouse: a central repository of data that has been gathered from various operational applications in order to generate reports and conduct analysis. A generic data warehouse works as follows:
Operational applications such as ERP’s and CRM systems are designed for their users, but the resulting data is often needed by business analysts or managers. In order to study this data without burdening or interfering with the operational applications, this data is usually extracted from the operational applications and moved to a Data Warehouse where it can be used for managerial purposes. The Staging Area stores the raw data copied from the source systems. This raw data is then integrated into a single structure inside of an Operational Data Store (ODS). This integration process often involves cleaning the data, removing redundant data, and checking the quality of the data. The resulting integrated data is stored within the Data Warehouse. The information within the Data Warehouse represents enterprise-wide data, but its end users often only need certain pieces of this data. Consequently, customized collections of data are then moved on to Data Marts for use by various departments, primarily to feed their Business Intelligence applications.
Extract, Transform, Load (ETL): a process in which data is Extracted from a source, Transformed to meet a new need (this step can include data cleansing), and then Loaded into a target system.
ETL was INFA’s original addressable market. Prior to 2006, this technology’s use was limited to integrating on-premise data across multiple departmental databases.
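The three ETL steps can be sketched in miniature. In this toy Python example a list of raw strings stands in for the source system and a dict stands in for the warehouse table; the Transform step cleans whitespace, normalizes case, parses amounts, and drops duplicate rows.

```python
# Extract: read raw rows from a hypothetical source system.
source_rows = [" ACME ,  1200", "beta corp, 900", " ACME ,  1200"]

# Transform: clean, normalize, parse, and de-duplicate.
seen, transformed = set(), []
for row in source_rows:
    name, amount = (field.strip() for field in row.split(","))
    record = (name.title(), int(amount))   # "beta corp" -> "Beta Corp"
    if record not in seen:                 # simple data cleansing step
        seen.add(record)
        transformed.append(record)

# Load: write the cleaned records into the target (a dict stands in
# for the warehouse table).
warehouse = {name: amount for name, amount in transformed}
```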
Information Lifecycle Management (ILM): data usually has a lifecycle and thus declines in value over time, though the rate of this decline varies by the type of data and the organization using it. ILM is a comprehensive approach to managing data (and associated metadata), beginning with its creation and initial storage and ending when the data is no longer needed and is deleted. Within an ILM system, users create policies about how long different data types should be stored on various storage media, and the ILM system then executes these policies in an automated fashion. Newer data and data that need to be accessed more frequently are typically stored on faster, more expensive media, and less critical data is usually stored on slower, cheaper media. ILM also allows users to keep track of where different data is located within the data storage lifecycle.
In-Memory Database: a database management system that primarily stores data within the main memory (i.e. RAM) instead of on a hard drive in order to provide faster, more predictable performance.
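SQLite, the database engine bundled with Python, offers a convenient demonstration: passing the special path ":memory:" creates a database that lives entirely in RAM and disappears when the connection closes. (The table and values below are illustrative.)

```python
import sqlite3

# ":memory:" tells SQLite to hold the whole database in RAM
# rather than on disk.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE quotes (ticker TEXT, price REAL)")
conn.execute("INSERT INTO quotes VALUES ('INFA', 38.40)")

price = conn.execute(
    "SELECT price FROM quotes WHERE ticker = 'INFA'"
).fetchone()[0]
conn.close()
```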
Internet of Things (a.k.a. “The Industrial Internet”): a network of physical objects that computers can track and manage. RFID and Near Field Communication are examples of technologies that would facilitate an Internet of Things.
Java Virtual Machine (JVM): a virtual machine that can execute Java byte code regardless of the host computer’s architecture.
Java was designed to have as few implementation dependencies as possible. The goal was to enable developers to “Write Once, Run Anywhere” (WORA). This meant that code running on one platform wouldn’t need to be recompiled to run on another platform. Instead, Java applications would run on the Java Virtual Machine which was the code execution component of the Java platform.
Master Data: information that is key to a business’ operation. It can include information about customers, products, employees, materials, suppliers, etc. Importantly, even though this data is needed by many different groups of users, it is seldom centrally stored. Instead, it is replicated which creates the opportunity for inconsistencies and inaccuracies.
Master Data Management (MDM): the processes, policies and tools used to manage an organization’s Master Data. Business units often operate in silos even though they may have some customers in common. Importantly, the customer usually thinks he is working with a single company as opposed to different departments within a single corporation, and he will thus become frustrated when the different departments cannot coordinate smoothly with each other. One example would be the checking department, brokerage department, and mortgage department of a bank. Each of these departments will use its own systems and will thus enter the customer’s information separately, which can lead to inconsistencies and inaccuracies. MDM tools can standardize information, remove duplicate information, and use rules to prevent incorrect data from entering the system in order to create an authoritative source of Master Data for use throughout the organization.
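The bank example can be sketched as follows. Three departmental records describe what is really one customer; a standardization rule (digits-only phone numbers) plus a deliberately crude matching rule (same phone number means same customer) collapses them into master records. All names, numbers, and the matching rule itself are illustrative assumptions, far simpler than what real MDM tools do.

```python
# Departmental records for (mostly) the same customer; values are invented.
records = [
    {"name": "John Q. Smith", "phone": "555-0147", "dept": "checking"},
    {"name": "JOHN Q SMITH",  "phone": "5550147",  "dept": "brokerage"},
    {"name": "J. Smith",      "phone": "555-9999", "dept": "mortgage"},
]

def standardize_phone(phone):
    # Standardization rule: keep digits only, so "555-0147" == "5550147".
    return "".join(ch for ch in phone if ch.isdigit())

# Matching rule (crude, for illustration): identical standardized phone
# numbers refer to the same customer; keep the first name seen.
master = {}
for rec in records:
    key = standardize_phone(rec["phone"])
    master.setdefault(key, rec["name"].title())
# The checking and brokerage records merge; the mortgage record,
# which has a different phone number, remains separate.
```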
Metadata: “data about data.” Two definitions are in common use. Definition 1: Structural Metadata—the design and specification of data structures; information about the containers that the actual data is stored in.
Definition 2: Descriptive Metadata / Meta Content—this is information or descriptions about the data itself such as what language it is in, what tools were used to create it, and where to find related data.
Middleware: SW that connects two otherwise separate applications. It can generally be thought of as “glue” or “pipes” between other applications. Importantly, middleware often has a “step-child” status because while it is absolutely necessary, it is typically not embraced or developed with as much enthusiasm as the applications themselves because it is much less visible and tangible. This status probably also stems from the fact that middleware resides in-between applications.
Natural Language Processing: a field of computer science involved in helping computers to understand and respond to humans’ natural language (i.e. the language people use when they communicate with each other).
Nearline Storage: an intermediate type of data storage in-between online storage which supports frequent, very fast access to data and offline storage which is used for backups or long term storage with infrequent access. An example of nearline storage is a tape library where a robot retrieves the tapes. This process takes a few seconds, but it is relatively quick. Nearline storage and archiving are a means by which to reduce the size of the online DB’s and thus improve the online DB’s performance.
NoSQL Database: a type of database that uses looser data models than relational databases. NoSQL databases may involve some structured data, which has led some developers to describe them as “Not only SQL” databases.
Relational databases were introduced in the 1970’s, and their general architecture reflected the high cost of storage and the limited data complexity of that time. A number of developments since then (most notably far cheaper storage and the growth of very large, loosely structured data sets) have created demand for less rigid databases.
NoSQL databases are most often used to take advantage of distributed computing applications that harness multiple, low-cost computers (“horizontal scaling”) instead of using a larger, more powerful single computer (“vertical scaling”). Such horizontal scaling results in more economical performance gains and better system availability. NoSQL databases tend to be most useful in situations where an extreme quantity of information needs to be stored and retrieved AND the relationships between the data are less important.
Online Transaction Processing (OLTP): a class of systems that facilitate and manage transaction-oriented applications. In computer science, “transaction processing” refers to information processing that is divided into individual, indivisible operations called “transactions.”
Transactions: units of work executed against the database. Importantly, transactions are “all or nothing” actions, so they are either fully completed and thus change the data within the database or they somehow fail and result in zero change to the database.
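The “all or nothing” property is easy to demonstrate with SQLite. In this sketch a money transfer is attempted as a single transaction: the debit succeeds, a simulated crash occurs before the credit, and the rollback undoes the debit so the database is unchanged. The account names and amounts are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('A', 100), ('B', 0)")
conn.commit()

try:
    # One logical transaction: debit A, then credit B.
    conn.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'A'")
    raise RuntimeError("simulated crash before the credit step")
    # conn.execute(... credit B ...)  # never reached
except RuntimeError:
    conn.rollback()  # all or nothing: the debit above is undone

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
conn.close()
# balances shows zero change: {"A": 100, "B": 0}
```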
Query Optimizer: A query is a request for specific information from a database, and some queries are very complex. For complex queries, there are usually many ways to access the required data, but the time required to execute them can range from one second to multiple hours. It is thus beneficial to first ascertain the most efficient path by which to access the required data. Query optimization is an automated means of finding a very efficient path to the data, though interestingly, the query optimizer may not find the absolute shortest path because doing so would itself require considerable time.
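Most databases will show you the access path their optimizer chose. SQLite exposes this through the EXPLAIN QUERY PLAN statement; in the sketch below (table and index names are invented), the plan reveals that the optimizer will use the index on the customer column rather than scan the whole table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT)")
conn.execute("CREATE INDEX idx_customer ON orders (customer)")

# Ask the optimizer how it plans to execute the query, without running it.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'Acme'"
).fetchall()
plan_text = " ".join(str(row) for row in plan)
conn.close()
# plan_text mentions idx_customer: the optimizer chose the indexed path.
```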
Relational Database: a database that stores information in tables that can be linked to other tables. (A spreadsheet can be used to create a relational database.)
Run time: the period during which a computer program is executing. A Run-time System is SW that supports the execution of computer programs.
SAP Business Warehouse: SAP’s business intelligence, analytical, reporting, and data warehousing solution.
Software Development Kit (SDK): a set of development tools used to create applications for a given software or hardware platform.
Structured Query Language (SQL): a special-purpose programming language used to manage relational databases. SQL is the most widely used database language.
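A short, self-contained illustration of both entries above: using SQLite from Python, two tables (customers and orders, with made-up data) are linked through a shared customer id, and a SQL JOIN query pulls a unified answer out of them.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         customer_id INTEGER, total INTEGER);
    INSERT INTO customers VALUES (1, 'Acme'), (2, 'Beta');
    INSERT INTO orders VALUES (10, 1, 500), (11, 1, 250), (12, 2, 100);
""")

# The JOIN links the two tables through the customer_id column,
# and SQL aggregates each customer's order totals.
rows = conn.execute("""
    SELECT c.name, SUM(o.total)
    FROM customers c JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name ORDER BY c.name
""").fetchall()
conn.close()
# rows is [("Acme", 750), ("Beta", 100)]
```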