A Path to “turbo-charge” M-Files

My name is Ted Willich and I am the CEO of Jacksonville, Florida-based NLP Logix. We provide machine learning as a service, which means we can take your big data, build predictive models for just about any task your enterprise can think of, and deliver the results back to you. From predicting which individuals will respond to a marketing campaign to identifying which patients will develop a certain disease, the opportunities are almost endless. In fact, Kaggle maintains a list of companies paying big bucks to whoever builds the best predictive models for their problems.

I chose this topic for my first guest blog on Information 121 because I strongly believe that Le Roux Cilliers and the team at Laminin Solutions are about to increase the capabilities of M-Files dramatically. It all revolves around "big data" and what you can choose to do, or not do, with that precious asset of your enterprise.

First, what is "big data"? If you are an M-Files customer, that is an easy question to answer: it is the information you store on the M-Files platform. If you are not an M-Files client, it is all the information you store in various places across your enterprise. The bottom line is that big data is the information your organization has collected, stores and uses every day.

So how is the Laminin Solutions team able to "turbo-charge" your M-Files tool?

Simply, by integrating predictive modeling and machine learning into the M-Files solution. That's right: what was once the exclusive domain of BIG enterprise-wide solutions delivered by companies like SAS and IBM (SPSS) is now available to the small to medium-sized enterprise. Better still, this powerful capability is delivered at a fraction of the cost of those big legacy systems, which can run into the hundreds of thousands of dollars per year, and that is just for the server license!

So how do we do it? Simple: we use best-of-breed data science tools and build the models right at the application programming interface (API). In other words, we communicate directly with M-Files to access your data, score it with our models, and deliver the results right back to you, without ever leaving M-Files. No expensive tools, no additional hardware, and no more worrying whether your business is getting the most value from its "big data".
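To make the round trip above concrete, here is a minimal sketch of what "access, score, and deliver back" could look like in code. The endpoint paths, property IDs, and the toy linear `score` function are all hypothetical stand-ins for illustration; they are not the actual M-Files REST API or a real production model.

```python
# Hypothetical sketch of the round trip described above: pull document
# metadata out of a vault, score it with a trained model, and write the
# prediction back as a property. Endpoints and property IDs are
# illustrative only, not the real M-Files API.
import json
from urllib.request import Request, urlopen

VAULT_URL = "https://vault.example.com/REST"  # hypothetical vault address


def fetch_objects(session_token):
    """Pull candidate documents from the vault (illustrative endpoint)."""
    req = Request(f"{VAULT_URL}/objects",
                  headers={"X-Authentication": session_token})
    with urlopen(req) as resp:
        return json.load(resp)["Items"]


def score(record, weights, bias=0.0):
    """Toy linear model standing in for the trained predictive model."""
    return bias + sum(weights.get(k, 0.0) * v for k, v in record.items())


def write_score_back(session_token, object_id, value):
    """Write the prediction back as a vault property (illustrative)."""
    body = json.dumps({"PropertyDef": 1101, "Value": value}).encode()
    req = Request(f"{VAULT_URL}/objects/{object_id}/properties",
                  data=body, method="PUT",
                  headers={"X-Authentication": session_token})
    urlopen(req)
```

The point of the sketch is the shape of the workflow, not the details: the data never has to leave the document-management platform for a separate analytics silo.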

We believe our approach to modeling is part of our competitive advantage: we fit and test our models at the API/library level. While this is more challenging than going through a point-and-click wizard interface, it provides more flexibility and produces much better results than we believe we could achieve using traditional tools like SAS and SPSS.
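For readers curious what "fitting at the library level" means in practice, here is a short illustrative example using scikit-learn and synthetic data as stand-ins (the blog does not name the specific libraries NLP Logix uses). Every step, the train/test split, the fit, and the evaluation, is explicit code the modeler controls, rather than choices hidden behind a wizard.

```python
# Fitting and testing a model at the library level: each step is
# explicit, scriptable code. scikit-learn and synthetic data are
# stand-ins for whatever best-of-breed stack a team actually uses.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic data standing in for real enterprise data.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Fit the model, then evaluate it on held-out data.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
```

Because the whole pipeline is code, it can be versioned, tuned, and swapped out piece by piece, which is the flexibility a point-and-click interface gives up.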

In less technical terms, try this analogy along with the pictures below: imagine your data is contained in a race track, and to extract it you have to drive a car around it at very high speed. The way we build models, we are sitting behind the steering wheel of that car going 200 mph. The way SAS and SPSS build theirs is like driving that car from the grandstands with a remote control.

[Image] Our Perspective of Your Data When Building a Model

[Image] Their Perspective of Your Data When Building a Model
