
FYI Solutions Blog

Aug 20, 2014

Tailor Your Resume

Author: Sean Smallman

Applying for jobs can be a frustrating endeavor. More often than not, you receive no response to your application, and your resume is lost in the proverbial black hole. Although there is no way to guarantee that your resume avoids this fate, one approach is certain to increase the likelihood of your resume generating a response and ultimately securing an interview. This approach is called “tailoring your resume.”

Tailoring your resume is the process of identifying the required skills and technologies in a job description and fine-tuning the description of your professional experience to accurately display your experience with those requirements. When applying to a job posting, your resume often ends up in the hands of a recruiter or human resources representative before being passed along to the hiring manager. Chances are these “gatekeepers” are non-technical and extremely busy. Tailoring your resume to the job description makes it easier for a gatekeeper to quickly identify your relevant skills.

Below are three simple steps to tailoring your resume:

Step 1 – Identify Keywords – The first step in the process of tailoring your resume is to read the job description to identify and highlight key words and technologies.

Step 2 – Create a Qualifications Summary – A Qualifications Summary is a section of your resume that gives a comprehensive yet concise snapshot of the skills and technologies that you’ve utilized throughout your career. The technologies highlighted in the previous step should be included at the top of this section to catch the eye of the recruiter.

Step 3 – Customize your Professional Experience – The Professional Experience section of your resume is your chance to elaborate on the skills and technologies listed in the Qualifications Summary, especially those relevant to the job description. Describe the context in which you’ve used these skills and technologies, any measurable success you’ve had, and the value that success has brought to each company or project.

Tailor your existing resume using these three steps when applying to each individual job posting. The key to a well-tailored resume is keeping key words and technologies at the top of each section of your resume. These skills need to be relevant to the position for which you are applying, they need to be skills you are currently using, and they need to be consistent throughout your resume. If you work towards creating a resume that is relevant, current, and consistent for each position you apply for, you will undoubtedly increase your chances of securing an interview.

FYI Solutions is a leader in strategic staffing. For more information about opportunities through FYI Solutions, please contact us.

Aug 08, 2014

Data Virtualization Overview

Written by: Kevin Jacquier

Overview

This post provides an entry-level introduction to Data Virtualization. The majority of the information here is based on material gathered during a one-day class presented by Dave Wells at a TDWI Conference, entitled “TDWI Data Virtualization: Solving Complex Data Integration Challenges.”

What is Data Virtualization?

There are many definitions of “Virtualization” and “Data Virtualization” available from dictionaries, Wikipedia, and industry experts. To keep it simple, Data Virtualization provides access to data directly from one or more disparate data sources, without physically moving the data, in such a manner that the technical aspects of location, structure, and access language are transparent to the Data Consumer.

In practice, this means that Data Virtualization makes data available for Business Analytics without the need to move all of the information into a single physical database. Keep in mind that going directly against the source is not always possible; at the same time, there are many cases where using Data Virtualization directly against the source is far more efficient than replicating the information.

There is one key aspect of Data Virtualization to keep in mind: Data Virtualization does not replace ETL; it complements it. In other words, Data Virtualization doesn’t work well when significant transformations or complex business logic must be applied to the source data before it can be used by the Data Consumer. Data Virtualization works well when a Data Warehouse is its source, or when the Source Data can be accessed with minimal complexity. So why use Data Virtualization?

Data Virtualization removes the need to move data from Data Warehouse to Data Warehouse, or even from Data Warehouse to Data Marts, which many companies do today simply because it is the only way to make the data available to their applications. Data Virtualization works very well when the source data is well defined and readily accessible for business logic.

Data Virtualization is primarily based on a Semantic Layer that creates views over the Source Data. These views are organized in layers:

  • Physical Layer or Connection Views – access to the source data
  • Business Layer or Integration Views – linking data from the different sources
  • Application Layer or Consumer Views – presenting data in a manner that is understandable by the Data Consumer

There is no actual ETL with Data Virtualization; it is all just a series of views, which is both an advantage and a disadvantage. The Data Virtualization Semantic Layer sits between the Source Data and the Delivery Applications (i.e. Business Analytics).
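To make the layering a little more tangible, here is a rough sketch using ordinary SQL views over two made-up sources (a CRM database and an ERP database). A real Data Virtualization tool would define these views in its own designer and handle the cross-source federation itself, but the layering concept is the same:

    -- Physical Layer / Connection Views: expose each source as-is
    CREATE VIEW phys_crm_customers AS
        SELECT customer_id, customer_name, region, customer_type
        FROM crm_db.customers;

    CREATE VIEW phys_erp_orders AS
        SELECT order_id, customer_id, order_date, order_amount
        FROM erp_db.orders;

    -- Business Layer / Integration Views: link data from the different sources
    CREATE VIEW biz_customer_orders AS
        SELECT c.customer_id, c.customer_name, c.region, c.customer_type,
               o.order_id, o.order_date, o.order_amount
        FROM phys_crm_customers c
        JOIN phys_erp_orders o ON o.customer_id = c.customer_id;

    -- Application Layer / Consumer Views: present the data in business terms
    CREATE VIEW app_sales_by_region AS
        SELECT region, YEAR(order_date) AS order_year,
               SUM(order_amount) AS total_sales
        FROM biz_customer_orders
        GROUP BY region, YEAR(order_date);

Nothing is copied when these views are defined; the virtualization engine resolves them against the sources at query time.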

The diagram below represents these layers, as well as the Source and Data Consumer layers. This diagram was taken from a document available from Composite Software, one of the leading Data Virtualization software companies.

[Diagram: Data Virtualization layers – Data Sources, Semantic Layer views, and Data Consumers]

Please note within the diagram above that the Data Sources can be data in any format, including “Big Data” such as NoSQL, Hadoop, Web Services, and internal and external Cloud data. In addition, there can be a multitude of different Data Consumers, including all the major Business Analytics vendors.

The Data Virtualization software includes an Optimizer that tunes the Semantic Layer query SQL before sending it to the actual Data Sources. In addition, the Data Virtualization software has very substantial in-memory caching; this is the primary difference between what Data Virtualization software offers and some of the BI vendors’ caching capabilities. Because Data Virtualization can keep a lot of data in memory, it can provide efficient response times. Data Virtualization works very well when the sources hold high-volume data but the Data Consumers ask for low-volume summaries. In other words, Data Virtualization should not be used as a data extraction or data dump source, but as a Business Analytics source.
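Continuing with the made-up views sketched above, the contrast between a good use and a poor use of the virtual layer looks roughly like this:

    -- Good fit: a low-volume summary drawn from a high-volume source
    SELECT region, order_year, total_sales
    FROM app_sales_by_region
    WHERE order_year >= 2012;

    -- Poor fit: a full extract drags every detail row through the virtual layer
    SELECT *
    FROM biz_customer_orders;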

To return to the main point, one of the key aspects of Data Virtualization is making data available so that Data Consumers from different areas can access it virtually, not physically. Application-specific Consumer Views can be created within the virtual space, eliminating the need to copy (materialize) the data into another application or database.

In addition to the features mentioned above, the following capabilities are found in the main Data Virtualization products:

  • Data Governance: Data governance functions of User Security, Data Lineage, and tracking and logging of access and use.
  • Data Quality: Data Cleansing, Quality Metadata, Monitoring, and Defect prevention.
  • Security:  User Security can be applied once in the Data Virtualization software and then any application accessing data via Data Virtualization has this security automatically applied.
  • Management Functions: Management functions of the Data Virtualization environment such as server and storage monitoring, network load monitoring, cache management, access monitoring, performance monitoring, Security Management (Domain, Group, and user levels).

The following diagram from Composite Software shows their “Platform,” or functional areas.

[Diagram: Composite Software platform functional areas]

Benefits of Data Virtualization

Data Virtualization allows for a very agile development environment, primarily because it is based on the creation of views into the data rather than the actual coding of database objects (tables, views, procedures) and the ETL code needed to support Data Materialization (data warehouse/ETL). This allows for much quicker, more cost-effective development cycles than data materialization projects. Please note, however, that Data Virtualization isn’t always a replacement for Data Materialization; there are times when ETL is necessary and Data Virtualization isn’t the whole solution. Even then, Data Virtualization can still complement the materialized solution.

Data Virtualization provides the following Business Benefits over Data Materialization:

  • Supports Fast Prototype Development
  • Can be an interim solution to final ETL Project
  • Quicker time-to-solutions for business
  • Respond to increasing volumes and types of data
  • Increased data analysis opportunities
  • Information completeness
  • Improved information quality
  • Reduced data governance complexity
  • Better able to balance time, resources, and results
  • Reduced Infrastructure Costs

The following Technical Benefits are obtained through Data Virtualization over Data Materialization:

  • Ease of Data Integration
  • Iterative Development
  • Shorter and Faster Development Cycles
  • Increased Developer Productivity
  • Enables Agile data integration projects
  • Works with unstructured and semi-structured data
  • Easy Access to cloud hosted data
  • Query performance Optimization
  • Less maintenance & management of integration systems
  • Complements Existing ETL data integration base
  • Extension and migration – not radical change

When to Use Data Virtualization

Data Virtualization can always be used when companies have data in disparate data sources or need to merge data with other data resources such as Social Media or external data. The real question is whether Data Virtualization can be applied directly to the source data, or whether the source data first needs to be materialized before it can be utilized by anything, including Data Virtualization.

The following are examples of factors that make Data Virtualization a good candidate.  If too many of these factors sway in the other direction, Data Materialization may be required.

  • Time Urgency for Solution Implementation
  • Cost / Budget limitations
  • Unclear or Volatile Requirements
  • Replication Restraints
  • Risk Aversion of Organization
  • Network Uptime
  • Source System Availability
  • Quality Data Available
  • Source System Load
  • Minor Business & Data Rules
  • Availability of History in Source System
  • Data Freshness requirements
  • Small Data Query Result Sets

When not to Use Data Virtualization

The question here is really whether Data Materialization is required before any access to the data can be performed, regardless of whether Data Virtualization is used. Data Virtualization can always complement materialized data; the issue is whether Data Virtualization can go directly to the source, or whether the source needs to be materialized first.

The following factors would tend to influence a project to materialize data for any type of access.

  • Low Availability of the Source Data (not available when needed for reporting) *
  • Heavy load already on source Data *
  • Poor Data Quality requiring Significant Data Cleansing
  • Complex Data Transformation Requirements
  • High Volume Result Sets for Data Consumers
  • Data Source is Multidimensional (Cubes)
  • History Not available in Source Systems

* For the factors marked above, it should be asked whether Data Replication could resolve the issue rather than actually materializing the data (ETL to a warehouse).

Data Virtualization Software

There are many Data Virtualization software vendors in the market today. The following are a few identified by analysts as leaders in Data Virtualization. At this time, I have not researched cost or server requirements for any of these vendors.

  • Composite Software
  • Denodo Technologies
  • IBM (InfoSphere Federation Server)
  • Informatica
  • Rocket

Many of these companies will be willing to provide a proof of concept before committing to a purchase agreement.

Summary

Data Virtualization has been around for many years, but it is really becoming mainstream today and will become much more of the norm in the very near future. It is no longer necessary to continuously copy data from location to location as different groups need it. Data Virtualization allows an organization to make data accessible virtually to all the different groups within the organization without physically storing it over and over again. It will also allow projects to be developed much more quickly and to change much faster, as there is no ETL layer requiring database changes as well as code modifications.

For more information about implementing Data Virtualization in your organization, contact FYI Solutions.

 

Aug 01, 2014

Tips for Growing out of BI Immaturity

Author: Jeff Busch

As I visit various clients and potential clients, and as I talk to colleagues and network at events and conferences, I am amazed at the evidence I see that BI immaturity remains an issue. One would think that, by now, Business Intelligence has passed from buzzword into regular business practice and that everyone knows how to do it. This is clearly not the case. Sometimes it feels a little like we’ve given the keys to the family car to a 10-year-old. However, there isn’t a lot of guidance available in public sources to help those who are just starting out in BI to do it right from the beginning. If you google “BI Immaturity,” only two results are actually related to Business Intelligence. If you spell out “Business Intelligence,” the results are only slightly better. So what’s a small business or corporate group to do?

There’s nothing wrong with immaturity; it implies that there is something to grow into. Maturity is possible, and it doesn’t require a history of failed projects to attain it. Although most fledgling BI initiatives will fail to reach their full potential, or fail altogether, this is often not directly the fault of the software or solutions vendor. It is often due to the client asking too much without an understanding of what BI maturity involves. In this blog, I hope to speak to the small business or corporate group and provide a few tips to help these groups start well so that they can finish well.

First of all, let’s define a few things. Since I am assuming that my audience might be just starting out in Business Intelligence, it would be a good idea to explain what I mean. There are many definitions of BI, but I like to describe it as using data to produce information that provides actionable answers to business questions. This is different from data reporting. Reporting will give you a list of customers and what they bought last week. BI will show you which customers are the most profitable and which customer types are trending up or down so that you can appropriately adjust your marketing and sales practices.

When I talk about BI maturity, I am talking about the capacity to understand this connection between business questions, data, and information. Mature BI groups can look at an unanswered business question, determine what specific report requirements will help answer that question, and what data is required to fulfill those requirements. They understand that the quality of the underlying data is the key to good results and they are willing to put time and money into achieving a high level of data quality. Although many large organizations invest a lot in these initiatives, this seems to follow the typical 80/20 rule. 80% of the value comes from the first 20% of the effort; as a small business or independent corporate group, you can go far with a relatively small amount of focused attention on a few key things.

There are three things that all successful BI initiatives have in common. This list may not be comprehensive, but I think these are the most important to focus on as you launch a new BI project. First, you need to ask the right questions. Chances are there is some business pain, or there are business questions, that caused you to determine you need a BI solution. Many companies will start there, but when they actually launch the project, they lose focus and begin asking questions like “What data do we want in the reports?” or “What is the executive dashboard going to look like?” These are important questions to ask later when actually designing the reports, but they should not be the starting point. The most successful projects have a small number of business questions clearly defined, and the entire project focuses on answering those questions. Examples of good business questions are:

  • How efficient is our product supply chain? Where are the greatest cost inefficiencies?
  • Which are our most/least profitable customers and why?
  • Why do we lose opportunities to our competitors? Where can we improve our sales cycle?

These high-level business questions can then be broken down into more detailed business requirements such as: sales, cost and gross margin by customer, grouped by region and customer type, displayed year over year. These business requirements are critical to define early because they will determine what data is required, how it should be stored, and what tool will best meet your analysis needs.
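To make that concrete, here is a rough sketch of how such a requirement might eventually translate into a reporting query, assuming a hypothetical sales fact table and customer dimension; the point is that a clearly stated requirement maps directly onto specific data elements:

    -- Sales, cost, and gross margin by customer,
    -- grouped by region and customer type, year over year
    SELECT c.region,
           c.customer_type,
           c.customer_name,
           YEAR(f.sale_date)                   AS sale_year,
           SUM(f.sales_amount)                 AS sales,
           SUM(f.cost_amount)                  AS cost,
           SUM(f.sales_amount - f.cost_amount) AS gross_margin
    FROM sales_fact f
    JOIN customer_dim c ON c.customer_id = f.customer_id
    GROUP BY c.region, c.customer_type, c.customer_name, YEAR(f.sale_date)
    ORDER BY c.region, c.customer_type, c.customer_name, sale_year;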

After you have clearly defined your business questions and business requirements, you need to focus on your data. Many projects fail because little attention is paid to the data early on, and it is assumed that because it looks OK in the source system it will work OK in a BI reporting system. Then, after all the work of designing reports is done, it is discovered that there are data quality issues such as missing data in required fields, inconsistent codes (like 53 different state abbreviations), names stored sometimes first-name-first and sometimes last-name-first, fields used for multiple types of information, and so on. All of these things can make well-built reports nearly useless to the business.

The data required by the reports needs to be clean, complete, and stored in a way that is efficient for reporting. This is not an afterthought; it should be a very high-priority stage in the project and should be completed before moving on to designing and building the actual reports. This also highlights the need for a dedicated place to store the data used by the BI solution. This is called a data warehouse or data mart, and it is where you will clean up and process the data to get quality information from it. Although large organizations have large, complex data warehouses, this isn’t required. They can be relatively small and inexpensive, and you might even be able to use open-source database technologies like MySQL. A good solutions vendor will guide you down this path and place a high priority on getting things right at this point before moving on.
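As a small illustration of the kind of cleanup that belongs in the data mart load (the staging and reference tables here are hypothetical), standardizing those inconsistent state values and catching missing required fields might look something like this:

    -- Standardize free-form state values against a reference table
    -- (assumes one reference row per state)
    UPDATE stg_customers s
    SET state_code = (SELECT r.state_code
                      FROM state_ref r
                      WHERE UPPER(TRIM(s.state_raw)) IN (r.state_code, UPPER(r.state_name)))
    WHERE EXISTS (SELECT 1
                  FROM state_ref r
                  WHERE UPPER(TRIM(s.state_raw)) IN (r.state_code, UPPER(r.state_name)));

    -- Flag rows that still fail basic quality checks before they reach the reports
    SELECT customer_id, customer_name, state_raw
    FROM stg_customers
    WHERE customer_name IS NULL
       OR state_code IS NULL;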

The third major element of successful BI practices is designing and building a solution, not installing a tool. Very few tools can handle all of the necessary elements of BI by themselves, and no tool is equally good at all types of analysis. It is very important to design a complete solution that delivers the best answers to your business questions and to choose the right tool for the right piece of the solution. In some cases all of the tools should come from the same vendor for cross-compatibility and intercommunication; in other cases individual functions should be handled by tools with a narrow focus, and tools from multiple vendors might be chosen. The key is to choose the right tools for the right job, not to choose the tool and then build the solution around it. Another danger related to this is choosing a tool or building reports based solely on visual glitz and glamour. Good-looking visuals are important, but the most important thing is to deliver actionable information as clearly as possible. Designing attractive visuals must support this goal, not be the goal in and of itself.

As with all things in life, BI maturity comes with time and guided experience. By focusing on a few key things, a small business or corporate group can quickly grow in their understanding of BI and how to use it effectively to improve their business.

For more information about getting started with business analytics, or to measure your current business intelligence maturity, contact FYI Solutions.

Jul 22, 2014

6 Simple Steps to market your business through social media!

Author: Patty Ploykrachang

Social media plays a key role in our present society. It is used worldwide and is embedded in every corner of the web. This gives companies boundless opportunities to network with prospects, establish new relationships, and build even stronger relationships with existing clients. You can publish informative content to a large audience within a matter of seconds. I call that free marketing, so why not use it?

Remember: the new word of mouth is now within the power of your keyboard so use it wisely!

Here are 6 tips to gain exposure:

 1. Link your social media accounts to your company website.

  •  This will drive traffic to your social media accounts and one link will promote the other.
  • This will also improve your SEO (search engine optimization) results and enhance visibility of your service or brand product.
  • *Use keywords from your website on your social updates/posting!* If you are a business analytics expert, for example, let your social media say that to the world.

 2. Network with new people/join different groups.

This will help your exposure so take the opportunity to give your company an introduction during group discussions.

  • Getting involved with online discussions will increase your social value.
  • Share your expertise with clients and prospects; be an influencer.
  • Interact by commenting, liking, sharing, tweeting, and retweeting all that is relevant to your business.
  • Be mindful when commenting. You do not want to sabotage your own success by offending anyone.  Stay upbeat! No one wants to interact with a Debbie Downer!
  • Monitor competitor and industry mentions. This is a good way to stay informed and stay up to date. It is also a good way to connect with potential followers.

 3. Post & Share informative content.

  • Tie what you post back to your company’s core values. Share your company’s accomplishments, events, and press releases.
  • Share content, but make sure to have follow-up conversation that will lead to real engagement.
  • The more engaged conversation on your post, the longer it will stay on top of the social news feed.
  • REMEMBER TO #HASHTAG! Place the prefix “#” in front of the keyword or phrase. For example:  #FYISolutions, #HowtoUseHashtag, #Blogging… This makes the word or phrase a searchable link, not to mention that it improves your SEO!

 4. Get Visual

  • Attach pictures with your post.  This will help draw in your audience to read your content.

 5. Find a balance between informative and excessive.

  • Be sure to publish content daily, but be careful not to publish content too often. You may lose the interest of followers.

The social media site you select will also influence how frequently you should post.

6. Keep track of your insights using each platform’s free analytics tools.

  • Most of the social media sites will show you which of your posts were most looked at and which were most engaged with. They even show you how many clicks each posting received. This gives you an idea of what type of post draws more of an audience.  This may also help with brainstorming for future postings.

Staying connected, building relationships, and establishing credibility do not happen overnight. Remember, just like anything else, you need to maintain your social media accounts. Check your messages and give feedback. Answer questions right away. Over time, constant interaction with followers will make you a well-known professional in your industry. So remember… it’s all about networking!

 Stronger Relationships. Smarter Solutions.

FYI Solutions teams connect business goals to IT goals, building the relationships within your organization that build lasting solutions.  For more information, contact FYI Solutions.

 

 

Jul 16, 2014

The Value of Early Prototyping

Author:  Kevin Jacquier

I recently joined a project that baffled my mind with respect to the amount of time and resources that had been spent prior to any true visual representation of the requirements and functionality. Unfortunately, my peers and I were brought in well into this investment. It’s all great work, but it has consumed over six months of time across more than four distinct groups (Business Users, Business Analysts, Architects, Quality Assurance, and now Design Architects). I understand the need for thorough requirements and design, but there is still a great deal of concern about whether the currently proposed functionality will satisfy the needs of the user community.

Allow me to step up here onto my soapbox.  You see, I am a firm believer in early prototyping, especially on projects that involve significant user interaction.

Some feel that the prototype can’t happen until all requirements have been obtained.  I feel much differently:

•    Can you prototype without 100% of the user requirements?  Absolutely.

•    Can you prototype without 100% of the architecture design in place?  Absolutely.

•    Is the prototype throwaway? Most definitely not!  A solid prototype can serve as the base for the actual project solutions.

Here are just some of the benefits of early prototyping:

•    The user gets an early understanding of how the product will look and function.  This levels expectations early, and it also shows users some additional capabilities that they may not have previously considered.   This is obviously better from both the user and development standpoints than finding that a project does not meet user expectations eight months into the project.

•    Design flaws can be detected early, not eight months down the road.

•    Rapid prototyping affords developers and business users more frequent interaction.   This iterative approach allows users to experience the look and feel as well as the proposed functionality earlier in the process, so their concerns may be expressed and addressed BEFORE such a significant development investment has been made.

•    Rapid prototyping typically increases the speed of system development.

•    Rapid prototyping assists in refining the end product. Different aspects of requirements can be tried and tested, and immediate feedback from the user is possible.

•    Better communication is enabled between the business and developers as there is clear expression of requirements and expectations.

•    True requirements for nearly all Business Intelligence (BI) projects are not fully understood until user acceptance testing (UAT) is completed or a project is put into production.  Many times users think they know what they want, but when they actually see it in action, different ideas are sparked and changes are needed.   Changes to a completed project may require changes at all layers of the architecture — database designs, ETL, BI metadata, reports, dashboards, and so on.  This will result in more time, higher cost, and higher risk of error, not to mention the reduction in its credibility.

•    It is a very painstaking task to identify business requirements, translate them into functional requirements, and then into technical designs. In many cases, users, analysts, and designers don’t speak the same language, or they use very different terminology. There are a lot of back-and-forth meetings to understand and confirm these documents. Rapid prototyping can allow the user and designer to collaborate and actually see the requirement in effect without spending all those resources generating pages and pages of documentation.

•    Having a working prototype provides a great tool to discuss alternatives or additional requirements.  Many times it is hard to ask the right questions in meetings.   Many times it’s up to the business users to articulate what they want and if they don’t do so, the end product will not represent what their real needs are.  With prototyping, those needs are easier to see and discuss.

•    Even when business users articulate their needs perfectly for IT, and even when IT understands the language of the business, requirements are often still described based on theoretical needs and intangible designs. It is not until users actually get to use an iteration of the solution that they realize what they really need.

Prototypes provide a way for users to “kick the tires” of a solution early in the project life cycle.  This can help avoid the costly rework that may be required after UAT or Production Implementation.   This does not mean that Prototyping is easy.  There are still many layers that need to be accounted for (database Layer, ETL, BI Metadata, Reports, Dashboards), but these can be done in iterations and the initial work doesn’t have to be “Production” code.

Okay, now I am off my soapbox!

For more information about effective prototyping as well as other Business Analytics topics, please contact FYI Solutions, a leader in information management and business analytics for over thirty years.

Jul 09, 2014

Cognos TM1: Three Habits to Break and Three to Make

Author: Jason Apwah

IBM TM1 training taught you a lot, and maybe you learned a few things in college. Unfortunately, though, a number of the most important good programming habits are learned through advice and on-the-job experience. Here are three TM1 programming habits that you should consider breaking and three to consider making, in order to make your client (and yourself) much happier in the long run.

Habits to Break

Hardcoding Directory Paths

Hardcoding is easier, quicker, and sometimes more efficient. The trade-off, however, is that hardcoding hurts maintainability. Once the green light is given to begin coding, many developers are inclined to do what’s quick, and not necessarily what’s best. One very easy habit to fall into is hardcoding directory paths for logging and file processing. Example: there are four directory paths pointing to the C drive, and 10 TIs use those paths. Let’s say two days before the production date the client discovers that there won’t be enough space on the C drive, and all logging should be done on the D drive. It would have been far easier to update four values in a control cube than to update at least 40 lines of code and retest the 10 TIs.

Overfeeding Cells

In your TM1 experience, you’ve probably discovered that it’s important to feed rule-calculated cells. It may also be the case that you’ve naturally started to add factors of safety everywhere in your code to minimize failure. In other words, you may be purposely overfeeding to avoid underfeeding. Remember, overfeeding hurts both performance and memory. For example, suppose you have the calculation: ['Revenue'] = ['List Price'] * ['Units']. It’s possible that your feeder ['Units'] => ['Revenue'] is feeding revenue cells even when the corresponding list price is zero. Instead, replace such a feeder with a conditional feeder like: ['Units'] => DB(IF(['List Price'] <> 0, 'Revenue Cube', ''), !dim1, !dim2, !dim3, 'Revenue');

Dynamic Subsets Everywhere

Dynamic subsets can increase a TM1 application’s maintainability and flexibility because they automatically ‘refresh’ views. However, a dynamic subset expression attached to a dimension is evaluated whenever the subset is referenced by the TM1 server. Therefore, static dimensions should not have dynamic subsets used in views, because they will reduce performance. Subsets used in views should be dynamic only when the dimension is frequently updated.

Habits to Make

Update Recorded MDX

TM1SubsetBasis() is the default expression that the MDX expression recorder uses to refer to how the dimension looked before it was modified. What happens when another TI script adds even just a single element to that dimension? TM1SubsetBasis() actually has no basis to execute the function, and the ‘updated’ subset created by that expression will be empty or at least incorrect. Do not forget to edit your recorded expressions to use something like TM1SubsetAll(), Descendants() or some other appropriate MDX function.

Comment & Format Your Code

Sometimes, seeing is believing. Which is easier for you to understand?

[Screenshot: an uncommented, unformatted TI script]

[Screenshot: the same script, commented and formatted]

Naming Conventions

TM1 Architect does not allow the developer to organize objects into folders the way Performance Modeler does. So it’s good to make a habit of following naming conventions for TIs, so that similar types of scripts are situated near each other and thus easier to locate. See the table below for example TI prefixes.

Case | Prefix | Example
Related to updating a dimension | DimUpdate | DimUpdate SalesRep from CSV
Related to building a dimension | DimBuild | DimBuild CalendarDate
Related to loading a cube | CubUpdate | CubUpdate FinalPnL from InitialPnL
Related to generic scripts | LibProcess | LibProcess Create Subset

 

Breaking and making these habits should make your life easier and ultimately make your client happier! For more information about these and other TM1 best practices, contact FYI Solutions.

Jun 26, 2014

Five Tips for Nailing that Phone Interview

Author: Dan Scovill

Most hiring managers these days begin their hiring process with a phone interview. It allows them to have a quick conversation (often 30 minutes or less) to determine whether a candidate might be a fit before taking the time to get the whole team together for an in-person interview. Phone interviews can be a very important part of the hiring process, and they should not be taken lightly. This is your chance to make a good first impression! Don’t miss the opportunity. Below are five points to keep in mind:

1.  Take the time to prepare

  • You should have an upbeat and cohesive story relating the progression of your career (credibly explaining any gaps in your experience) and the direction in which you see yourself heading.
  • Do some research on the company you are interviewing with. They will expect that you know about the company and have some questions about the position ready, if they ask.
  • Look up the profile of the manager/person you will be speaking with on LinkedIn. This way you will know his or her background and education. You could also discover a mutual colleague; perhaps this person could give you more insight on the position or who you will be speaking with.

2.  Get off on the right foot

  • As soon as you pick up the phone, start with something like this: “Good Morning, So-and-so, this is Dan. I am looking forward to speaking with you!”
  • This lets them know you are excited for the call and ready to go. It also avoids any awkward introductions.

3. Find out what they need

  • As soon as possible after the interview starts, ask “What skills are most important to you for this role?”
  • They may answer in terms of technical skills and/or ability to navigate successfully in specific environments.
  • Listen very carefully to how they answer this question, as they may tell you how you can position yourself in the interview to be most successful.
  • Specifically highlight your actual experience that is relevant to their needs.

 4. Leave no doubt

  • End the call with the question, “Do you have any hesitations about whether I could do this job?”
  • This will give you one last chance to present a rebuttal to any hesitations they may have.
  • This may seem forward, but you need to make sure they do not have any misunderstandings regarding your experience.
  • Assumptions are often made and you can’t afford to have the hiring manager making an incorrect assumption. You need to alleviate their hesitations BEFORE the interview is over. After the interview, once they make up their mind about you as a candidate it is very hard to change that, even if they made their decision based on incorrect information.

5. Project professionalism

  • Be respectful of the manager’s time and be concise with your answers. This is not the time for tangents. Tell them what they want to know – not what you want to tell them. Make sure you answer the question and then ask if they would like more detail. If they want more information, they will let you know.
  • Make sure you find a private and quiet space for your phone interview.
  • You should not have any interruptions. No kids laughing or dogs barking in the background.
  • Use a landline if possible. If you are using a cell phone, make sure you are in an area with perfect reception.
  • Abstain from using slang, foul language, or any terminology that might be considered inappropriate such as something sexist or in poor taste – even if your interviewer does!
We hope you found these tips helpful, and that by following them, you land that next job!

FYI Solutions is an IT Consultancy based out of Parsippany, NJ. For over 30 years, we have provided strategic staffing and business solutions to companies across industries such as financial services, automotive, publishing, and retail, among others. For more information about FYI Solutions, click here.

Jun 19, 2014

IBM DB2 with BLU Acceleration

Author: Kevin Jacquier

IBM recently announced the new BLU Acceleration feature of IBM DB2. BLU stands for “Big Data, Lightning Fast, Ultra-Easy.” This is a feature of IBM DB2 version 10.5, not a new product; therefore, it requires minimal ramp-up time to get started. Here are some of the highlights of DB2 BLU.

Simply put, BLU Acceleration stores DB2 tables in a column-based rather than the typical row-based architecture. There are many benefits to this, which are highlighted below. One significant advantage IBM DB2 has with BLU Acceleration is that row-based and column-based tables can coexist, and be queried together, in a single database. Other column-based database products (such as Sybase IQ) support only the column-based architecture.
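As a simple illustration (the tables are made up), DB2 10.5 lets you choose the organization per table with the ORGANIZE BY clause, and a single query can join both kinds:

    -- Column-organized table for analytic workloads
    CREATE TABLE sales_history (
        sale_date     DATE,
        store_id      INTEGER,
        product_id    INTEGER,
        units_sold    INTEGER,
        sales_amount  DECIMAL(15,2)
    ) ORGANIZE BY COLUMN;

    -- Row-organized table in the same database
    CREATE TABLE store_dim (
        store_id    INTEGER NOT NULL PRIMARY KEY,
        store_name  VARCHAR(100),
        region      VARCHAR(50)
    ) ORGANIZE BY ROW;

    -- One query can mix row- and column-organized tables
    SELECT d.region, SUM(s.sales_amount) AS total_sales
    FROM sales_history s
    JOIN store_dim d ON d.store_id = s.store_id
    GROUP BY d.region;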

When compared with the existing row-based table architecture of IBM DB2, the BLU Acceleration feature provides impressive performance improvements, reduces disk space tremendously, supports extremely large volumes of data, and does not require indexing, thus simplifying maintenance.

I have included a list of testimonials on user adoption of this feature below. All seem to be very positive. To date, the only negative comment I have found is that BLU Acceleration loses its advantage when most of the columns of a table are queried. This is not really a downside, as it still performs as well as row-based tables; it just loses some of its edge. Typically this would only occur when users start running massive data dumps, which they should not be doing anyway.

The IBM Redbook titled “Leveraging DB2 10 for High Performance of Your Data Warehouse” has a lot of great information regarding DB2 BLU Acceleration. A link to a PDF version of this book is provided below:

http://www.redbooks.ibm.com/redbooks/pdfs/sg248157.pdf

Some analysts are calling IBM DB2 with BLU Acceleration “as good as Hadoop for Big Data.” The link below goes into this further:

http://davebeulke.com/ibm-blu-acceleration-best-yet-for-big-data/

Based on an IBM Press release, the following are highlights of BLU Acceleration:

  • Dynamic in-memory technology that loads terabytes of data in Random Access Memory, which streamlines query workloads even when data sets exceed the size of the memory.
  • “Actionable Compression,” which allows analytics to be performed directly on compressed data without having to decompress it. Some customers have reported as much as 10 times storage space savings.
  • An innovative advance in database technology that allows DB2 to process both row-based and column-based tables simultaneously within the same system. This allows much faster analysis of vast amounts of data for faster decision-making.
  • The simplicity to allow clients access to blazing-fast analytics transparently to their applications, without the need to develop a separate layer of data modelling or time-consuming data warehouse tuning.
  • Integration with IBM Cognos Business Intelligence Dynamic Cubes to provide breakthrough speed and simplicity for reporting and analytics. Companies can analyse key facts and freely explore more information faster from multiple angles and perspectives to make more-informed decisions.
  • The ability to take advantage of both multi-core and single instruction multiple data (SIMD) features in IBM POWER and Intel x86 processors

This is built into DB2 10.5, not a separate product, which means the time it takes to get started is incredibly fast. So it offers in-memory processing, analysis of compressed data, and simultaneous processing of columnar or row-based tables. It’s skipping data that isn’t relevant and saving space by compressing data. Very cool stuff, but just how fast is it?  The following link discusses this in detail:

http://www.mcpressonline.com/analysis-of-news-events/in-the-wheelhouse-db2-blu-acceleration-who-wants-to-go-fast.html

A demonstration of IBM DB2 with BLU Acceleration used in conjunction with IBM Cognos Dynamic Cubes is provided in the following link.

http://ibmbluhub.com/solutions/blu-cognos/

During a technology preview, IBM demonstrated that a 32-core system using BLU Acceleration could query a 10TB data set with 100 columns and 10 years of data with sub-second response time. “First we compress the data in the table by 10x resulting in a table that on disk is only 1TB in size. The query then only accesses 1 column so 1/100 of the columns in the table (1% – 10GB of 1TB). So using data skipping we can skip over 9 years and only look at 1 year (now 1GB of data). Now divide across 32 cores for the scan, each core processes only 32 MB of data. Scan will happen faster on encoded data (say 4x faster than traditional) as fast as 8MB of data on traditional system. Therefore, in the end each core is only processing 8MB of data which is no issue to get a sub-second response from.”   References in a press release had similar results, with improvements ranging from 10 to 45 times that of pre-BLU results.

If you are interested in learning more about DB2 BLU, or see opportunities for implementing it within your organization, contact FYI Solutions.  We have over 30 years of experience with business analytics and database technology.

Here are some quotes from companies using BLU Acceleration. The quotes below were taken from the following link:

http://www.mcpressonline.com/analysis-of-news-events/in-the-wheelhouse-db2-blu-acceleration-who-wants-to-go-fast.html

“When we compared the performance of column-organized tables in DB2 to our traditional row-organized tables, we found that, on average, our analytic queries were running 74x faster when using BLU Acceleration. The best outcome was a query that finished 137x faster by using BLU Acceleration.”– Kent Collins, Database Solutions Architect, BNSF Railway

“We were very impressed with the performance and simplicity of BLU.  We found that some queries achieved an almost 100x speed up with literally no tuning!” – Philip Källander, Chief Technical Architect – Datawarehouse & Analytics at Handelsbanken

“Wow…unbelievable speedup in query run times! We saw a speedup of 273x in our Vehicle Tracking report, taking a query from 10 minutes to 2.2 seconds. That adds value to our business; our end users are going to be ecstatic!” – Ruel Gonzalez, Information Services, DataProxy, LLC.

“Compared to our current production system, DB2 10.5 with BLU Acceleration is running 106x faster for our Admissions and Enrollment workloads. We had one query that we would often cancel if it didn’t finish in 30 minutes. Now it runs in 56 seconds every time. 32x faster, predictable response time, no tuning…what more could we ask for?” – Brenda Boshoff, Sr. DBA, University of Toronto

“10x. That’s how much smaller our tables are with BLU Acceleration. Moreover, I don’t have to create indexes or aggregates, or partition the data, among other things. When I take that into account in our mixed table-type environment, that number becomes 10-25x.” – Andrew Juarez, Lead SAP Basis and DBA, Coca-Cola Bottling Consolidated

“While expanding our initial DB2 tests with BLU Acceleration, we continued to see exceptional compression rates – our tables compressed at over 92%. But, our greatest thrill wasn’t the compression rates (though we really like it), rather the improvement we found in query speed which was more than 50X faster than with row-organized tables.” – Xu Chang, Chief DBA Support – DB2 and Oracle Databases, Mindray Medical International Ltd

“With DB2 10.5 using BLU Acceleration we were able to reduce our storage requirements by over 10x compared to uncompressed tables and our query performance also improved by 10x or more. In comparison to a competitive product, DB2 10.5 with BLU Acceleration used significantly less storage and outperformed them by 3x.” – Paul Peters, Lead Database Administrator, VSN Systemen B.V.

“When Adaptive compression was introduced in DB2 10.1, having achieved storage savings of up to 70%, I was convinced this is as good as it gets. However, with DB2 10.5 with BLU Acceleration, I have been proven wrong! Converting my row-organized, uncompressed table to a column-organized table gave me a massive 93.5% storage savings!” – Iqbal Goralwalla, Head of DB2 Managed Services, Triton

“The BLU Acceleration technology has some obvious benefits: It makes our analytical queries run 4-15x faster and decreases the size of our tables by a factor of 10x. But it’s when I think about all the things I don’t have to do with BLU, it made me appreciate the technology even more: no tuning, no partitioning, no indexes, no aggregates.” – Andrew Juarez, Lead SAP Basis and DBA, Coca-Cola Bottling Consolidated

“DB2 BLU Acceleration is all it says it is. Simplicity at its best, the “Load and Go!” tagline is all true. We didn’t have to change any of our SQL, it was very simple to setup, and extremely easy to use. Not only did we get amazing performance gains and storage savings, but this was achieved without extra effort on our part.” – Ruel Gonzalez, Information Services, DataProxy LLC.

FYI Solutions is an IBM Premier Partner in Business Analytics.  Feel free to contact us for more information.

Jun 10, 2014

Applying Predictive Analytics to Electronic Health Records

Author:  Joan Frick

With all the hype about “big data” and “predictive analytics” lately, I am surprised that more companies are not embracing emerging technologies that can take them to the next level.  There is so much data that can be leveraged to make informed decisions by predicting outcomes.  In order to take advantage of these technologies, it is important that the key information is accessible and organized to support predictive algorithms.  This article, published by Forbes.com, is a perfect example of how such methodology can be used to detect potential diseases in high risk patients, thus positioning them for early intervention that could ultimately save lives.  What could be more powerful than saving lives?

IBM gathered three years’ worth of data belonging to 350,000 patients. In addition to more than 200 factors such as blood pressure, beta blocker prescriptions, and weight, it combed through more than 20 million notes, uncovering nuggets of information that are not entered in a medical record’s fields. They include the number of cigarette packs a patient smokes, the pattern of prescriptions, and how well the heart is pumping. Additional details that might have escaped a doctor’s eye include a patient’s social history, depression, and living arrangements.

Predictive algorithms uncovered 8,500 patients at risk of having heart failure within a year; 3,500 were ferreted out because of natural language technology.

To read the full article, click here: http://www.forbes.com/sites/zinamoukheiber/2014/02/19/ibm-and-epic-apply-predictive-analytics-to-electronic-health-records/  

FYI Solutions, an IBM Premier Partner in Business Analytics, can assist you with determining how your organization can use information to predict outcomes that are relevant to you.  Contact us for more information.

May 21, 2014

IBM Business Analytics Summit

Author: Albert Stark

FYI Solutions was a sponsor of the recent IBM Business Analytics Summit in Morristown, N.J. IBM presented upcoming BI products including the IBM Watson Analytics Beta and the IBM Rapidly Adaptive Visualization Engine (RAVE).

I liked Watson Analytics. It very quickly takes a set of clean data and presents correlations, in order of most likely cause & effect. For example, if you have a set of consumer data showing online, in store and other purchases, Watson Analytics will show the consumer characteristics that are most likely to generate a purchase, in descending order of likelihood. It was nice to see the correlations in a crisp, clean format. While the same analytics have been around for a long time in other statistics packages, I expect this will be built into/integrated with IBM’s high-end BI tools, so it will be much easier to perform the analysis.

As a researcher, I was hoping to see more. For example, second-order correlations and multi-factor correlations such as:

•  Are online or in-store consumers with a coupon much more likely to buy?
•  If the consumer first went online and then came in-store, how does this impact likelihood of purchase?
•  What causes which consumers to buy a lot of the most high-profit items?

We need to know more; human behavior is not usually a direct 1-1 cause-effect. Brainstorming is much more fun if we have the underlying information, not just the data. Then, once we understand cause-and-effect, we can apply Predictive Analytics: if we take this action (e.g., price discount or coupon), how will this impact sales, profit, inventory, etc.?

I was hoping the Summit would address the processes, strategy, or technology to collect, manage, and disseminate the right data in a manner that is easy to consume. Pockets within corporations are doing this, but how do you enable a Corporate Analytics Capability? FYI Solutions has several projects where we are helping our clients advance Business Analytics at a corporate level by planning and building out data warehouses, data marts, frameworks, and BI reporting. Most of this needs to be in place to leverage tools such as Watson Analytics.

So let’s say we build the infrastructure, processes and capability to run some interesting analytics with even more interesting data. How do we best present this data so that it is easy to consume and enables decision and action? This is where I see RAVE heading. Very creative, artistic, and intelligent people have created a myriad of formats to present data. Here are some examples that IBM presented:

[Image: example visualizations presented by IBM]

RAVE enables IBM to apply and extend many visualization libraries (heat maps, scatter charts, spider charts, etc.) to their products, including Cognos. Even better, new visualizations can be submitted to the “Extensible Visualization Community” by the technical artists and then downloaded to include in BI reports. Nice. Much better than yet another pie chart for making my key message point!

In addition, by presenting the data visually in a couple of ways, maybe I, as a human, will gain an unexpected insight, leading me to a “Eureka!” moment. For example, while I’m applying Predictive Analytics, can I see which will yield the “best” result? (Given the Chief Marketing Officer’s interest in market share, time to execute, competitive position, etc.) Did someone say, “Prescriptive Analytics”?

Having been a scientist at Bell Labs and Xerox Research working on statistical analysis and artificial intelligence (applying Predictive and Prescriptive Analytics to build a self-learning system), I believe it should be relatively easy for the RAVE or BI engine to:

•  Look at the data

•  Assess the visualizations in terms of …

o  What I’ve done in the past
o  How others have presented similar data in similar context

•  Then recommend the top two or three visualizations.

Applied corporate anthropology and AI. Sweet.

If I am lucky, IBM is listening in to my dream and will call FYI Solutions in the morning to make this a reality. I had better go see if we have good clean data on this, readily available, and figure out how to best visualize the story.

For more information about the IBM Business Analytics Summit, or to get started on your Business Analytics journey, contact FYI Solutions.
