FYI Solutions Blog

Jul 22, 2014

6 Simple Steps to market your business through social media!

Author: Patty Ploykrachang

Social media plays a key role in today's society.  It is used worldwide and is embedded in every corner of the web. This gives companies boundless opportunities to network with prospects, establish new relationships, and build even stronger relationships with existing clients.  You can publish informative content to a large audience within a matter of seconds. I call that free marketing, so why not use it?

Remember: the new word of mouth is now within the power of your keyboard so use it wisely!

Here are 6 tips to gain exposure:

 1. Link your social media accounts to your company website.

  •  This will drive traffic in both directions, as each link promotes the other.
  • This will also improve your SEO (search engine optimization) results and enhance visibility of your service or brand product.
  • *Use keywords from your website on your social updates/posting!* If you are a business analytics expert, for example, let your social media say that to the world.

 2. Network with new people/join different groups.

This will help your exposure so take the opportunity to give your company an introduction during group discussions.

  • Getting involved with online discussions will increase your social value.
  • Share your expertise with clients and prospects; be an influencer.
  • Interact by commenting, liking, sharing, tweeting, and retweeting all that is relevant to your business.
  • Be mindful when commenting. You do not want to sabotage your own success by offending anyone.  Stay upbeat! No one wants to interact with a Debbie Downer!
  • Monitor competitor and industry mentions. This is a good way to stay informed and stay up to date. It is also a good way to connect with potential followers.

 3. Post & Share informative content.

  • Tie what you post back to your company’s core values.  Share your company’s accomplishments, events, and press releases.
  • Share content, but make sure to have follow-up conversation that will lead to real engagement.
  • The more engaged the conversation on your post, the longer it will stay at the top of the social news feed.
  • REMEMBER TO #HASHTAG! Place the prefix “#” in front of the keyword or phrase. For example:  #FYISolutions, #HowtoUseHashtag, #Blogging… This makes the word or phrase a searchable link, not to mention that it improves your SEO!

 4. Get Visual

  • Attach pictures with your post.  This will help draw in your audience to read your content.

 5. Find a balance between informative and excessive.

  • Be sure to publish content regularly, but be careful not to post too often, or you may lose the interest of your followers.

The social media site you select will also influence how frequently you should post.

 6. Keep track of your insights using the platforms’ free analytics tools.

  • Most social media sites will show you which of your posts were viewed most and which received the most engagement. They even show how many clicks each post received. This gives you an idea of which types of posts draw a larger audience, and it may also help with brainstorming future posts.

Staying connected, building relationships, and establishing credibility do not happen overnight. Remember, just like anything else, you need to maintain your social media accounts.  Check your messages and give feedback. Answer questions right away. Over time, constant interaction with followers will make you a well-known professional in your industry. So remember….it’s all about networking!

 Stronger Relationships. Smarter Solutions.

FYI Solutions teams connect business goals to IT goals, building the relationships within your organization that build lasting solutions.  For more information, contact FYI Solutions.


Share
Jul 16, 2014

The Value of Early Prototyping

Author:  Kevin Jacquier

I recently joined a project that baffled my mind with respect to the amount of time and resources spent prior to any true visual representation of the requirements and functionality.  Unfortunately, my peers and I were brought in well into this investment.   It’s all great work, but it has consumed over six months of time across five distinct groups (Business Users, Business Analysts, Architects, Quality Assurance, and now Design Architects).   I understand the need for thorough requirements and design, but there is still a great deal of concern about whether the currently proposed functionality will satisfy the needs of the user community.

Allow me to step up here onto my soapbox.  You see, I am a firm believer in early prototyping, especially on projects that involve significant user interaction.

Some feel that the prototype can’t happen until all requirements have been obtained.  I feel much differently:

•    Can you prototype without 100% of the user requirements?  Absolutely.

•    Can you prototype without 100% of the architecture design in place?  Absolutely.

•    Is the prototype throwaway? Most definitely not!  A solid prototype can serve as the base for the actual project solutions.

Here are just some of the benefits of early prototyping:

•    The user gets an early understanding of how the product will look and function.  This sets expectations early, and it also shows users additional capabilities that they may not have previously considered.   This is obviously better, from both the user and development standpoints, than discovering eight months in that the project does not meet user expectations.

•    Design flaws can be detected early, not eight months down the road.

•    Rapid prototyping affords developers and business users more frequent interaction.   This iterative approach allows users to experience the look and feel as well as the proposed functionality earlier in the process, so their concerns may be expressed and addressed BEFORE such a significant development investment has been made.

•    Rapid prototyping typically increases the speed of system development.

•    Rapid prototyping assists in refining the end product.   Different aspects of requirements can be tried and tested, and immediate feedback is possible from the user.

•    Better communication is enabled between the business and developers as there is clear expression of requirements and expectations.

•    True requirements for nearly all Business Intelligence (BI) projects are not fully understood until user acceptance testing (UAT) is completed or the project is put into production.  Users often think they know what they want, but when they actually see it in action, different ideas are sparked and changes are needed.   Changes to a completed project may require changes at every layer of the architecture — database designs, ETL, BI metadata, reports, dashboards, and so on.  This results in more time, higher cost, and higher risk of error, not to mention reduced credibility for the project.

•    It is a painstaking task to identify business requirements, translate them into functional requirements, and then into technical designs.   In many cases, users, analysts, and designers don’t speak the same language, or they use very different terminology, so there are many back-and-forth meetings to understand and confirm these documents.    Rapid prototyping allows the user and designer to collaborate and actually see the requirement in effect without spending all those resources generating pages and pages of documentation.

•    Having a working prototype provides a great tool to discuss alternatives or additional requirements.  Many times it is hard to ask the right questions in meetings.   Many times it’s up to the business users to articulate what they want and if they don’t do so, the end product will not represent what their real needs are.  With prototyping, those needs are easier to see and discuss.

•    Even when business users articulate their needs perfectly for IT, and even when IT understands the language of the business, requirements are often still described in terms of theoretical needs and intangible designs.  It is not until users actually get to use an iteration of the solution that they realize what they really need.

Prototypes provide a way for users to “kick the tires” of a solution early in the project life cycle.  This can help avoid the costly rework that may be required after UAT or production implementation.   This does not mean that prototyping is easy.  There are still many layers that need to be accounted for (database layer, ETL, BI metadata, reports, dashboards), but these can be done in iterations, and the initial work doesn’t have to be “production” code.

Okay, now I am off my soapbox!

For more information about effective prototyping as well as other Business Analytics topics, please contact FYI Solutions, a leader in information management and business analytics for over thirty years.

Share
Jul 09, 2014

Cognos TM1: Three Habits to Break and Three to Make

Author: Jason Apwah

IBM TM1 training taught you a lot, and maybe you learned a few things in college. Unfortunately, many of the most important programming habits are learned through advice and on-the-job experience. Here are three TM1 programming habits you should consider breaking and three to consider making, to keep your client (and yourself) much happier in the long run.

Habits to Break

Hardcoding Directory Paths

Hardcoding is easier, quicker, and sometimes more efficient. The trade-off, however, is that hardcoding hurts maintainability. Once the green light is given to begin coding, many developers are inclined to do what’s quick, not necessarily what’s best. One very easy habit to fall into is hardcoding directory paths for logging and file processing. Example: there are four directory paths pointing to the C drive, and 10 TIs use those paths. Say that two days before the production date the client discovers there won’t be enough space on the C drive, and all logging should be done on the D drive. In that case, it would have been far easier to update four lines in a control cube than to update at least 40 lines of code and retest all 10 TIs.
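
As a hedged sketch of the alternative, a TI script can read the path from a control cube at run time instead of hardcoding it (the cube and element names below are invented for illustration):

```
# Hypothetical sketch: fetch the logging directory from a control cube
# ('Sys Config' and its element names are made-up placeholders).
sLogDir  = CellGetS ( 'Sys Config', 'Logging', 'Directory Path' );
sLogFile = sLogDir | 'DimUpdate_SalesRep.log';

# If the client moves logging from C: to D:, only the control cube
# changes - none of the TIs that use this pattern need editing.
ASCIIOutput ( sLogFile, 'Load started' );
```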

Overfeeding Cells

In your TM1 experience, you’ve probably discovered that it’s important to feed rule-calculated cells. You may also have naturally started adding factors of safety everywhere in your code to minimize failure. In other words, you may be purposely overfeeding to avoid underfeeding. Remember, overfeeding kills performance and memory. For example, say you have the calculation [‘Revenue’] = [‘List Price’] * [‘Units’]. It’s possible that your feeder [‘Units’] => [‘Revenue’] is feeding revenue cells even when the corresponding list price is zero. Instead, replace such a feeder with something like: [‘Units’] => DB(IF([‘List Price’] <> 0, ‘Revenue Cube’, ‘’), !dim1, !dim2, !dim3, ‘Revenue’)
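
Laid out as they would appear in the rules file, the before-and-after feeders from this example look like the following (the cube and dimension names are the article's placeholders; the two feeders are alternatives, not meant to be used together):

```
['Revenue'] = N: ['List Price'] * ['Units'];

FEEDERS;

# Overfed: feeds Revenue even where List Price is zero
['Units'] => ['Revenue'];

# Conditional feeder: feeds Revenue only where a non-zero List Price exists
['Units'] => DB( IF( ['List Price'] <> 0, 'Revenue Cube', '' ),
                 !dim1, !dim2, !dim3, 'Revenue' );
```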

Dynamic Subsets Everywhere

Dynamic subsets can increase a TM1 application’s maintainability and flexibility because they automatically ‘refresh’ views. However, a dynamic subset’s MDX expression is re-evaluated whenever the subset is referenced by the TM1 server. Therefore, static dimensions should not have dynamic subsets that are used in views, because they will reduce performance. A subset used in views should be dynamic only when its dimension is frequently updated.
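
For illustration (the dimension name is invented), this MDX defines a dynamic subset of all leaf-level Product elements; every view that uses it re-evaluates the expression on the server, which is worthwhile only if Product actually changes often:

```
{ TM1FILTERBYLEVEL( {TM1SUBSETALL( [Product] )}, 0 ) }
```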

Habits to Make

Update Recorded MDX

TM1SubsetBasis() is the default expression the MDX recorder uses to refer to how the dimension looked when the subset was recorded. What happens when another TI script adds even a single element to that dimension? TM1SubsetBasis() no longer reflects the current dimension, and the ‘updated’ subset created by that expression will be empty or at least incorrect. Do not forget to edit your recorded expressions to use something like TM1SubsetAll(), Descendants(), or another appropriate MDX function.
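
As a hypothetical example (the dimension name and pattern are invented), a recorded expression and one possible hand-edited replacement might look like this:

```
// As recorded - anchored to a snapshot that later TI updates invalidate:
{ TM1SUBSETBASIS() }

// Hand-edited - re-evaluates against the full, current dimension:
{ TM1FILTERBYPATTERN( {TM1SUBSETALL( [SalesRep] )}, 'East*' ) }
```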

Comment & Format Your Code

Sometimes, seeing is believing. Which is easier for you to understand?

uncommented & unformatted

commented & formatted
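
Since the original side-by-side screenshots are not reproduced here, this small invented TI fragment shows the commented & formatted style:

```
#================================================================
# Process : DimUpdate SalesRep from CSV
# Purpose : Add any new sales reps found in the nightly extract
#================================================================

# Skip blank codes (e.g., trailing rows in the CSV)
IF ( vRepCode @= '' );
    ItemSkip;
ENDIF;

# Insert the element only if it does not already exist
IF ( DIMIX ( 'SalesRep', vRepCode ) = 0 );
    DimensionElementInsert ( 'SalesRep', '', vRepCode, 'N' );
ENDIF;
```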

Naming Conventions

TM1 Architect does not let the developer organize objects into folders the way Performance Modeler does, so it’s good to make a habit of following naming conventions for TIs. Similar types of scripts then sort near each other and are easier to locate. See the table below for example TI prefixes:

Case                               Prefix       Example
Related to updating a dimension    DimUpdate    DimUpdate SalesRep from CSV
Related to building a dimension    DimBuild     DimBuild CalendarDate
Related to loading a cube          CubUpdate    CubUpdate FinalPnL from InitialPnL
Generic, reusable scripts          LibProcess   LibProcess Create Subset


So following these steps should make life easier and ultimately make your client happier! For more information about these and other TM1 best practices, contact FYI Solutions.

Share
Jun 26, 2014

Five Tips for Nailing that Phone Interview

Author: Dan Scovill

Most hiring managers begin their hiring process with a phone interview. It allows them to have a quick conversation (often 30 minutes or less) to determine whether a candidate might be a fit before taking the time to get the whole team together for an in-person interview. Phone interviews are an important part of the hiring process and should not be taken lightly. This is your chance to make a good first impression, so don’t miss the opportunity. Below are five points to keep in mind:

1.  Take the time to prepare

  • You should have an upbeat and cohesive story relating the progression of your career (credibly explaining any gaps in your experience) and the direction in which you see yourself heading.
  • Do some research on the company you are interviewing with. They will expect you to know about the company and to have some questions ready about the position.
  • Look up the profile of the manager/person you will be speaking with on LinkedIn. This way you will know his or her background and education. You could also discover a mutual colleague; perhaps this person could give you more insight on the position or who you will be speaking with.

2.  Get off on the right foot

  • As soon as you pick up the phone, start with something like this: “Good Morning, So-and-so, this is Dan. I am looking forward to speaking with you!”
  • This lets them know you are excited for the call and ready to go. It also avoids any awkward introductions.

3. Find out what they need

  • As soon as possible after the interview starts, ask “What skills are most important to you for this role?”
  • They may answer in terms of technical skills and/or ability to navigate successfully in specific environments.
  • Listen very carefully to how they answer this question, as they may tell you how you can position yourself in the interview to be most successful.
  • Specifically highlight your actual experience that is relevant to their needs.

 4. Leave no doubt

  • End the call with the question, “Do you have any hesitations about my ability to do this job?”
  • This will give you one last chance to present a rebuttal to any hesitations they may have.
  • This may seem forward, but you need to make sure they do not have any misunderstandings regarding your experience.
  • Assumptions are often made, and you can’t afford to have the hiring manager act on an incorrect one. You need to address their hesitations BEFORE the interview is over. Once they have made up their mind about you as a candidate, it is very hard to change that, even if the decision was based on incorrect information.

5. Project professionalism

  • Be respectful of the manager’s time and be concise with your answers. This is not the time for tangents. Tell them what they want to know – not what you want to tell them. Make sure you answer the question and then ask if they would like more detail. If they want more information, they will let you know.
  • Make sure you find a private and quiet space for your phone interview.
  • You should not have any interruptions. No kids laughing or dogs barking in the background.
  • Use a landline if possible. If you are using a cell phone, make sure you are in an area with perfect reception.
  • Abstain from using slang, foul language, or any terminology that might be considered inappropriate such as something sexist or in poor taste – even if your interviewer does!

We hope you found these tips helpful, and that by following them, you land that next job!

FYI Solutions is an IT Consultancy based out of Parsippany, NJ. For over 30 years, we have provided strategic staffing and business solutions to companies across industries such as financial services, automotive, publishing, and retail, among others. For more information about FYI Solutions, click here.

Share
Jun 19, 2014

IBM DB2 with BLU Acceleration

Author: Kevin Jacquier

IBM recently announced the new BLU Acceleration feature of IBM DB2.  BLU stands for “Big Data, Lightning Fast, Ultra-Easy”.  It is a feature of IBM DB2 version 10.5, not a new product, so it requires minimal ramp-up time to get started. Here are some of the highlights of DB2 BLU.

Simply put, BLU Acceleration stores DB2 tables in a column-based rather than the typical row-based architecture.   There are many benefits, which are highlighted below.   One significant advantage of IBM DB2 with BLU Acceleration is that row-based and column-based tables can coexist, and be queried together, in a single database.   Other column-based database products (e.g., Sybase IQ) support only the column-based architecture.

When compared with the existing Row Based table architecture of IBM DB2, the BLU Acceleration feature provides amazing performance improvements, reduces disk space tremendously, supports extremely large volumes of data, and does not require indexing thus simplifying the maintenance.

I have included a list of testimonials on user adoption of this feature below.  All seem very positive.  To date, the only negative comment I have found is that BLU Acceleration loses its edge when most of the columns of a table are queried.  This is not a true downside, as it still performs as well as row-based tables in that case; it just loses some of its advantage.  Typically this would only occur when users run massive data dumps, which they should not be doing anyway.

The IBM Redbook titled “Leveraging DB2 10 for High Performance of Your Data Warehouse” has a lot of great information regarding DB2 BLU Acceleration.   The link to a PDF version of this book is provided below:

http://www.redbooks.ibm.com/redbooks/pdfs/sg248157.pdf

Some analysts are calling IBM DB2 with BLU Acceleration “as good as Hadoop for big data”.   The link below goes into this further:

http://davebeulke.com/ibm-blu-acceleration-best-yet-for-big-data/

Based on an IBM Press release, the following are highlights of BLU Acceleration:

  • Dynamic in-memory technology that loads terabytes of data in Random Access Memory, which streamlines query workloads even when data sets exceed the size of the memory.
  • “Actionable Compression,” which allows analytics to be performed directly on compressed data without having to decompress it. Some customers have reported as much as 10 times storage space savings.
  • An innovative advance in database technology that allows DB2 to process both row-based and column-based tables simultaneously within the same system. This allows much faster analysis of vast amounts of data for faster decision-making.
  • The simplicity to allow clients access to blazing-fast analytics transparently to their applications, without the need to develop a separate layer of data modelling or time-consuming data warehouse tuning.
  • Integration with IBM Cognos Business Intelligence Dynamic Cubes to provide breakthrough speed and simplicity for reporting and analytics. Companies can analyse key facts and freely explore more information faster from multiple angles and perspectives to make more-informed decisions.
  • The ability to take advantage of both multi-core and single instruction multiple data (SIMD) features in IBM POWER and Intel x86 processors

This is built into DB2 10.5, not a separate product, which means the time it takes to get started is incredibly fast. So it offers in-memory processing, analysis of compressed data, and simultaneous processing of columnar or row-based tables. It’s skipping data that isn’t relevant and saving space by compressing data. Very cool stuff, but just how fast is it?  The following link discusses this in detail:
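
As a sketch of how simple adoption is (table and column names here are invented for illustration), a table is made column-organized at creation time, and row-organized tables can sit alongside it in the same database:

```sql
-- Column-organized: the BLU Acceleration format
CREATE TABLE sales_fact (
    sale_date  DATE           NOT NULL,
    product_id INTEGER        NOT NULL,
    revenue    DECIMAL(15, 2)
) ORGANIZE BY COLUMN;

-- Row-organized tables can coexist in the same database
CREATE TABLE sales_audit (
    audit_ts TIMESTAMP,
    detail   VARCHAR(200)
) ORGANIZE BY ROW;
```

With the registry variable DB2_WORKLOAD set to ANALYTICS, column organization becomes the default, so existing CREATE TABLE statements need no change at all.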

http://www.mcpressonline.com/analysis-of-news-events/in-the-wheelhouse-db2-blu-acceleration-who-wants-to-go-fast.html

A demonstration of IBM DB2 with BLU Acceleration used in conjunction with IBM Cognos Dynamic Cubes is provided in the following link.

http://ibmbluhub.com/solutions/blu-cognos/

During a technology preview, IBM demonstrated that a 32-core system using BLU Acceleration could query a 10TB data set with 100 columns and 10 years of data with sub-second response time. “First we compress the data in the table by 10x resulting in a table that on disk is only 1TB in size. The query then only accesses 1 column so 1/100 of the columns in the table (1% – 10GB of 1TB). So using data skipping we can skip over 9 years and only look at 1 year (now 1GB of data). Now divide across 32 cores for the scan, each core processes only 32 MB of data. Scan will happen faster on encoded data (say 4x faster than traditional) as fast as 8MB of data on traditional system. Therefore, in the end each core is only processing 8MB of data which is no issue to get a sub-second response from.”   References in a press release had similar results, with improvements ranging from 10 to 45 times that of pre-BLU results.
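
Replaying the arithmetic from the quoted demo makes a quick sanity check (the figures are IBM's; the demo mixes decimal and binary units loosely, and that is reproduced here):

```python
raw_tb = 10.0                            # 10 TB table: 100 columns, 10 years
on_disk_tb = raw_tb / 10                 # 10x compression -> 1 TB on disk
one_column_gb = on_disk_tb * 1000 / 100  # query touches 1 of 100 columns -> 10 GB
one_year_gb = one_column_gb / 10         # data skipping keeps 1 of 10 years -> 1 GB
per_core_mb = one_year_gb * 1024 / 32    # scan split across 32 cores -> 32 MB each
effective_mb = per_core_mb / 4           # ~4x faster scan on encoded data -> like 8 MB

print(one_year_gb, per_core_mb, effective_mb)  # 1.0 32.0 8.0
```

Eight effective megabytes per core is comfortably within sub-second scan range, which is why the demo's claim is plausible on paper.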

If you are interested in learning more about DB2 BLU, or see opportunities for implementing it within your organization, contact FYI Solutions.  We have over 30 years of experience with business analytics and database technology.

Here are some quotes from Companies using BLU Acceleration:

The quotes below were taken from the following link:

http://www.mcpressonline.com/analysis-of-news-events/in-the-wheelhouse-db2-blu-acceleration-who-wants-to-go-fast.html

“When we compared the performance of column-organized tables in DB2 to our traditional row-organized tables, we found that, on average, our analytic queries were running 74x faster when using BLU Acceleration. The best outcome was a query that finished 137x faster by using BLU Acceleration.”– Kent Collins, Database Solutions Architect, BNSF Railway

“We were very impressed with the performance and simplicity of BLU.  We found that some queries achieved an almost 100x speed up with literally no tuning!” – Philip Källander, Chief Technical Architect – Datawarehouse & Analytics at Handelsbanken

“Wow…unbelievable speedup in query run times! We saw a speedup of 273x in our Vehicle Tracking report, taking a query from 10 minutes to 2.2 seconds. That adds value to our business; our end users are going to be ecstatic!”
- Ruel Gonzalez – Information Services, DataProxy, LLC.

“Compared to our current production system, DB2 10.5 with BLU Acceleration is running 106x faster for our Admissions and Enrollment workloads. We had one query that we would often cancel if it didn’t finish in 30 minutes. Now it runs in 56 seconds every time. 32x faster, predictable response time, no tuning…what more could we ask for?” – Brenda Boshoff, Sr. DBA, University of Toronto

“10x. That’s how much smaller our tables are with BLU Acceleration. Moreover, I don’t have to create indexes or aggregates, or partition the data, among other things. When I take that into account in our mixed table-type environment, that number becomes 10-25x.” -Andrew Juarez, Lead SAP Basis and DBA, Coca-Cola Bottling Consolidated

“While expanding our initial DB2 tests with BLU Acceleration, we continued to see exceptional compression rates – our tables compressed at over 92%. But, our greatest thrill wasn’t the compression rates (though we really like it), rather the improvement we found in query speed which was more than 50X faster than with row-organized tables.” – Xu Chang, Chief DBA Support – DB2 and Oracle Databases, Mindray Medical International Ltd

“With DB2 10.5 using BLU Acceleration we were able to reduce our storage requirements by over 10x compared to uncompressed tables and our query performance also improved by 10x or more. In comparison to a competitive product, DB2 10.5 with BLU Acceleration used significantly less storage and outperformed them by 3x.”
- Paul Peters, Lead Database Administrator, VSN Systemen B.V.

“When Adaptive compression was introduced in DB2 10.1, having achieved storage savings of up to 70%, I was convinced this is as good as it gets. However, with DB2 10.5 with BLU Acceleration, I have been proven wrong! Converting my row-organized, uncompressed table to a column-organized table gave me a massive 93.5% storage savings!” – Iqbal Goralwalla, Head of DB2 Managed Services, Triton

“The BLU Acceleration technology has some obvious benefits: It makes our analytical queries run 4-15x faster and decreases the size of our tables by a factor of 10x. But it’s when I think about all the things I don’t have to do with BLU, it made me appreciate the technology even more: no tuning, no partitioning, no indexes, no aggregates.” -Andrew Juarez, Lead SAP Basis and DBA, Coca-Cola Bottling Consolidated

“DB2 BLU Acceleration is all it says it is. Simplicity at its best, the “Load and Go!” tagline is all true. We didn’t have to change any of our SQL, it was very simple to setup, and extremely easy to use. Not only did we get amazing performance gains and storage savings, but this was achieved without extra effort on our part.” - Ruel Gonzalez – Information Services, DataProxy LLC.

FYI Solutions is an IBM Premier Partner in Business Analytics.  Feel free to contact us for more information.

Share
Jun 10, 2014

Applying Predictive Analytics to Electronic Health Records

Author:  Joan Frick

With all the hype about “big data” and “predictive analytics” lately, I am surprised that more companies are not embracing emerging technologies that can take them to the next level.  There is so much data that can be leveraged to make informed decisions by predicting outcomes.  In order to take advantage of these technologies, it is important that the key information is accessible and organized to support predictive algorithms.  This article, published by Forbes.com, is a perfect example of how such methodology can be used to detect potential diseases in high risk patients, thus positioning them for early intervention that could ultimately save lives.  What could be more powerful than saving lives?

IBM gathered three years’ worth of data belonging to 350,000 patients. In addition to more than 200 factors such as blood pressure, beta blocker prescriptions, and weight, it combed through more than 20 million notes, uncovering nuggets of information that are not entered in a medical record’s fields. They include the number of cigarette packs a patient smokes, the pattern of prescriptions, and how well the heart is pumping. Additional details that might have escaped a doctor’s eye include a patient’s social history, depression, and living arrangements.

Predictive algorithms uncovered 8,500 patients at risk of having heart failure within a year; 3,500 were ferreted out because of natural language technology.


To read the full article, click here: http://www.forbes.com/sites/zinamoukheiber/2014/02/19/ibm-and-epic-apply-predictive-analytics-to-electronic-health-records/  

FYI Solutions, an IBM Premier Partner in Business Analytics, can assist you with determining how your organization can use information to predict outcomes that are relevant to you.  Contact us for more information.

Share
May 21, 2014

IBM Business Analytics Summit

Author: Albert Stark

FYI Solutions was a sponsor of the recent IBM Business Analytics Summit in Morristown, N.J. IBM presented upcoming BI products including the IBM Watson Analytics Beta and the IBM Rapidly Adaptive Visualization Engine (RAVE).

I liked Watson Analytics. It very quickly takes a set of clean data and presents correlations, in order of most likely cause & effect. For example, if you have a set of consumer data showing online, in store and other purchases, Watson Analytics will show the consumer characteristics that are most likely to generate a purchase, in descending order of likelihood. It was nice to see the correlations in a crisp, clean format. While the same analytics have been around for a long time in other statistics packages, I expect this will be built into/integrated with IBM’s high-end BI tools, so it will be much easier to perform the analysis.

As a researcher, I was hoping to see more. For example, second-order correlations and multi-factor correlations such as:

•  Are online or in-store consumers with a coupon much more likely to buy?
•  If the consumer first went online and then came in-store, how does this impact likelihood of purchase?
•  What causes which consumers to buy a lot of the most high-profit items?

We need to know more; human behavior is not usually a direct 1-1 cause-effect. Brainstorming is much more fun if we have the underlying information, not just the data. Then, once we understand cause-and-effect, we can apply Predictive Analytics: if we take this action (e.g., price discount or coupon), how will this impact sales, profit, inventory, etc.?

I was hoping the Summit would address the processes, strategy or technology to collect, manage and disseminate the right data in a manner that is easy to consume. Pockets within corporations are doing this but how do you enable a Corporate Analytics Capability? FYI Solutions has several projects where we are helping our clients advance Business Analytics at a corporate level, by planning, and building out data warehouses, data marts, frameworks, and BI reporting. Most of this needs to be in place to leverage tools such as Watson Analytics.

So let’s say we build the infrastructure, processes and capability to run some interesting analytics with even more interesting data. How do we best present this data so that it is easy to consume and enables decision and action? This is where I see RAVE heading. Very creative, artistic, and intelligent people have created a myriad of formats to present data, and IBM showed a number of examples at the Summit.

RAVE enables IBM to apply and extend many visualization libraries (heat maps, scatter charts, spider charts, etc.) to their products, including Cognos. Even better, new visualizations can be submitted to the “Extensible Visualization Community” by the technical artists and then downloaded to include in BI reports. Nice. Much better than yet another pie chart for making my key message point!

In addition, by presenting the data visually in a couple of ways, maybe I, as a human, will gain an unexpected insight, leading me to a “Eureka!” moment. For example, while I’m applying Predictive Analytics, can I see which will yield the “best” result? (Given the Chief Marketing Officer’s interest in market share, time to execute, competitive position, etc.) Did someone say, “Prescriptive Analytics”?

As a past scientist at Bell Labs and Xerox Research who worked on statistical analysis and artificial intelligence (applying Predictive and Prescriptive Analytics to build a self-learning system), I believe it should be relatively easy for the RAVE or BI engine to:

•  Look at the data
•  Assess the visualizations in terms of:
   o  What I’ve done in the past
   o  How others have presented similar data in a similar context
•  Then recommend the top two or three visualizations.

Applied corporate anthropology and AI. Sweet.
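The recommendation steps above can be sketched in a few lines of Python. To be clear, this is a hypothetical illustration, not the actual RAVE or Cognos engine: the chart types, the data-shape tags, and the scoring rule (suitability plus past usage) are all assumptions of mine.

```python
# Hypothetical sketch of the visualization recommender described above.
# Scores candidate chart types against (a) simple data characteristics and
# (b) how often each chart has been used before in a similar context.

from collections import Counter

# (a) Which chart types suit which data shapes -- illustrative rules only.
SUITABILITY = {
    "scatter chart": {"two_numeric"},
    "heat map": {"two_categorical", "matrix"},
    "spider chart": {"multi_metric"},
    "bar chart": {"one_categorical_one_numeric"},
    "line chart": {"time_series"},
}

def recommend(data_shape, usage_history, top_n=3):
    """Return up to top_n chart types for the given data shape.

    data_shape: a tag such as "two_numeric" describing the data.
    usage_history: chart types used before in similar contexts.
    """
    past = Counter(usage_history)  # (b) what I and others have done before
    scored = []
    for chart, shapes in SUITABILITY.items():
        score = (2 if data_shape in shapes else 0) + past[chart]
        if score > 0:
            scored.append((score, chart))
    scored.sort(reverse=True)  # highest score first
    return [chart for _, chart in scored[:top_n]]

history = ["scatter chart", "scatter chart", "heat map"]
print(recommend("two_numeric", history))  # scatter chart ranks first
```

A real engine would of course derive the data shape from the data itself and mine the usage history from many users, but the ranking idea is the same.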

If I am lucky, IBM is listening in on my dream and will call FYI Solutions in the morning to make this a reality. I had better go see whether we have good, clean, readily available data on this, and figure out how best to visualize the story.

For more information about the IBM Business Analytics Summit, or to get started on your Business Analytics journey, contact FYI Solutions.

May 14, 2014

Data are features that matter too.

Author: Narisha Maduray

“Although all vendors are seeking to achieve the ultimate goal of providing both engaging business-user-oriented capability and enterprise features that enable IT governance and control, none has yet achieved this. Bridging this chasm will be crucial to achieving leading market share, mind share and momentum in the BI market.”

Gartner Magic Quadrant 2014 for Business Intelligence and Analytics Platforms 

Gartner highlights a big gap that cannot be ignored in an age of exponentially increasing data sources.

One of the key assets of a BI project is the data itself and one of the key resources instrumental in closing the rift between usability and governance is the data steward.

Let’s explore the role and responsibilities of a data steward as it pertains to quality, accountability and the broader enterprise.

This post references a popular Dilbert strip to focus this governance tale and make it a trifle more engaging and memorable. Check out the strip before reading the article for maximum effect.

1. Quality rather than quantity matters
2. Accountability in data stewardship matters
3. Looking outside of the box matters

1. Quality rather than quantity matters.

When interviewing candidates to join your organization, the selection criteria are clearly defined. When it comes to data stewards, it is no different. Only a special set of abilities will arm the data steward to turn data into enterprise assets, not just locally, but for the broader range of business users.

a. They know the data, the measures, dimensions and business context.

b. They understand the enterprise information strategy and can balance servicing short-term business needs against those that will be addressed in the long term.

c. They are well versed in the technology, so they can sift through the volume of suggestions from business consumers and determine the right analytics for the right product feature.


2. Accountability in data stewardship matters.

“In the history of the world, no one has ever washed a rented car,” says Lawrence Summers, an economist and former President of Harvard. The same holds for data: without a named owner who is accountable for it, no one keeps it clean.

Is an Agile approach to user acceptance testing the answer to swiftly removing features that no longer align with the latest requirements of the product?



Two goals of Agile BI:

a. Technology must support evolving business needs.

b. Governance must identify the proper data assets to ensure data reliability over time.

Within the ambit of information governance, people who have a stake in the decision-making process manage data assets and business processes.


Two implications of Agile BI for the IT department:

a. Governance is not a role for IT to fulfill. IT manages the information architectures and helps establish the support structures required by Agile BI, but it does not own governance itself.

b. The value proposition of each data entity within the organization does not fall under the IT umbrella.


3. Looking outside of the box matters.


An enterprise perspective coupled with awareness of business politics is imperative. Breaking down functional silos and collaborating with data stewards in other business units will help the data steward consider broader implications, ensuring that decisions concerning definitions, business rules, quality expectations, and data use also account for the full lifecycle of the data.

A team of data stewards provides a center of excellence for strengthening the required skills, promoting collaboration among the data stewards and embedding them into the culture of the organization.

Successful enterprise information management programs use data stewards. When it comes to managing data as an enterprise asset – data stewards are the resources to make it happen. Embrace data stewards who demonstrate technical and interpersonal skills. This dynamic combination will accelerate self-service adoption and foster trust through integrity in the data.

For more information regarding data stewardship and Agile BI, contact FYI Solutions.

May 08, 2014

Jobseeker: What is the Best Way to Use LinkedIn’s Anonymous View setting?

Author: Gregg Ruoti


I am often surprised that so many LinkedIn members are unaware they can make their viewing of other members completely anonymous.  When I tell clients this, more often than not they haven’t even heard of this LinkedIn capability.  But the real question is: should your views of other LinkedIn members remain private, or should these fellow members know you’ve viewed them?  It’s a simple answer: if you are a job seeker, you need to do both.

When you view a recruiter or a member of your shared LinkedIn group, it could be beneficial to you if they are aware you have viewed their profile.  As a recruiter I can’t tell you how many people I have found projects for because they viewed my profile without ever directly contacting me.

I like to send them an InMail asking whether I can help them with information about a new project, now or sometime in the future.  Obviously they were curious enough to look at my profile, so maybe I can offer them help.  Frankly, they are discreetly looking, and these are the people many firms such as FYI prefer to introduce to their clients.

When you view a member within a shared LinkedIn group, that person may have hiring authority in the technology area you share an interest in.  They may offer to review your resume, or give you leads to departments or professional associates that have requirements for someone with your skill set.

On the flip side, if you are just starting to dip your toe in the water of a job search and want to be discreet, without every single person you viewed reaching back out to you, make sure you have selected to remain anonymous: under Account & Settings > Privacy & Settings, use the link “Select what others see when you’ve viewed their profile.”

Also, if you are scheduled to interview with people at a firm, we feel it is best to ask your recruiter to send you a PDF version of each person’s LinkedIn bio.  At this juncture, it is best to be discreet when viewing the actual people you’ll be interviewing with.

Feel free to contact me or anyone at FYI Solutions if you have any questions or comments.

Apr 29, 2014

TOP IT JOB SKILLS FOR 2014: BIG DATA, MOBILE, CLOUD, and SECURITY

Author: Michele D’Aries

IT executives and technical recruiters attended TechRepublic’s roundtable and shared their predictions for technical-talent hiring. Garth Schulte, a trainer at CBT Nuggets, talks about Big Data and how companies are realizing the importance of storing and analyzing large volumes of data. Big Data is an area where there will be jobs for technical talent, and IT workers who take the time to expand their skills will quickly find more opportunities available to them across the job market.

Another area for growth is UX engineering. Clients are realizing that a great user experience really can make a difference to their bottom line and provide a healthy return on investment (ROI).

More and more clients are embracing the cloud, so that will be another area for jobs in 2014. The cloud lets clients increase capacity or add capabilities without investing in new infrastructure, and it is expected to grow 17.4 percent every year until 2017, according to Gartner’s public Cloud Services Forecast Overview. With more and more companies turning to cloud solutions, employers will be looking for people who have cloud-related skills, including Hadoop and Python and Ruby programming.

IT security is another growth area, given all the high-profile hacking at major retailers. By hiring security professionals, companies can take steps to prevent security breaches.

The article goes on to discuss the top IT jobs that will be in demand: Software Developers, Database Administrators (DBAs), and IT Security professionals. These are also the skill sets people should consider adding in 2014 so they can bring more value to a client.

Technology skills aren’t the only factor employers consider when assessing candidates for IT jobs. Employers look at a candidate’s interpersonal skills to ensure new hires will fit into the culture of the company. They want candidates who can collaborate and communicate well with business users, and who understand business domains such as marketing, sales, and finance. Employers are increasingly seeking people with knowledge of business disciplines in addition to tech skills, whether it’s a developer who understands the supply chain in retail or a Java developer with experience in financial derivatives trading systems. Employers hire for culture fit first and then for the experience the position requires.

Article: http://www.techrepublic.com/article/top-it-job-skills-in-2014-big-data-mobile-cloud/

FYI Solutions, located in Parsippany, NJ, specializes in advanced analytics. We work with successful companies like yours that are concerned with:
• an inability to leverage all of their data, especially unstructured data
• an insufficient analytics platform that does not provide accurate, timely information for decision-making
• an over-dependence on Excel
• an inability of their current systems to support growth

Do any of these strike a chord? If so, contact us at FYI Solutions and we will help you begin your Analytics Journey.
