Mobile Monitoring Solutions


Evaluation and comparison of open source software suites for data mining and knowledge discovery

MMS Founder

Article originally posted on Data Science Central. Visit Data Science Central

An article by A.H. Abdulrahman, J.M. Luna, M.A. Vallejo, and S. Ventura titled “Evaluation and comparison of open source software suites for data mining and knowledge discovery” (published by Wiley in Data Mining and Knowledge Discovery, Vol. 7, Issue 3, 2017; see this link) provides the research community with an extensive study of the features included in data mining tools. The final scores for usability, as of 2017, look as follows:

The conclusion of this study is that “RapidMiner, KNIME, and WEKA appear as the most promising open source data mining tools on a basis of the two specific evaluation procedures”.

A few comments on this analysis approach: the study did not include the R project, and it seems to favor Java-based software. Also, there is no distinction between “application”, “framework”, and “environment”. For example, DataMelt is an environment rather than a self-contained application, and it includes Weka as an additional external package that can be used from the DataMelt scripting environment by data scientists.

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.


Presentation: Multi-service Reactive Streams Using Spring, Reactor, and RSocket

MMS Founder

Article originally posted on InfoQ. Visit InfoQ



25 Statistical Concepts Explained in Simple English – Part 2

MMS Founder

Article originally posted on Data Science Central. Visit Data Science Central

This resource is part of a series on specific topics related to data science: regression, clustering, neural networks, deep learning, decision trees, ensembles, correlation, Python, R, TensorFlow, SVM, data reduction, feature selection, experimental design, cross-validation, model fitting, and many more. To keep receiving these articles, sign up on DSC.

25 Statistical Concepts Explained in Simple English

To make sure you keep getting these emails, please add us to your address book or whitelist us. Part 1 of this series can be found here.


Digital Decisioning Platforms – A New Way to Slice the Pie

MMS Founder

Article originally posted on Data Science Central. Visit Data Science Central

Summary:  Digital Decisioning Platforms are a new segment identified by Forrester that marries Business Process Automation, Business Rules Management, and Advanced Analytics.  For platform developers, it’s a new way to slice the market.  For users, it eases the integration of predictive models into the production environment.


As analytic platform vendors and the many startups in advanced analytics and AI have approached the market, there have been primarily three business models.

  1. Provide a general-purpose analytic platform (e.g. SAS, IBM, Alteryx, RapidMiner).
  2. Create an analytics platform for a targeted enterprise-wide process across many industries (e.g. sales: Einstein; B2B marketing: Radius; HR: Stella; legal: Everlaw).
  3. Focus on the analytics needs of a single vertical industry, and typically a single vertical process (advertising: dataxu; real estate: Redfin; agriculture: BlueRiver).

In each case, the vendor is seeking to maximize its market reach in some defensible manner.  In terms of what is most popular, or at least most talked about, the third represents the currently hot ‘vertical strategy’ most favored by VCs for AI-first startups.


A New 4th Business Model for Analytics

Now Forrester claims to have spotted a fourth emerging model, which it labels ‘Digital Decisioning Platforms’ in its first review of the segment, current as of Q4 2018 (find a copy here).

This interesting new segment seeks to be a general purpose platform across all enterprises that marries three existing capabilities:

  • Business Process Automation
  • Business Rules Management
  • Advanced Analytics


Digital Decisioning, or Cognitive Automation as KPMG prefers to call it in the graphic above, is a rapidly expanding category.

You might argue that several existing platforms, for example in the CRM, marketing, or advertising space, also combine these features, but the key here is that Digital Decisioning platforms must be widely applicable across many use cases within an enterprise.

Use cases can undoubtedly be found in many places, ranging from HR to accounting to supply chain management.  However, the most prevalent cases are likely to involve enhancing customer interaction in an optimal and consistent fashion.

According to a separate Forrester report, “A key challenge of digital business is deciding what to do in the customer’s moment of need – and then doing it.  Digital Decisioning software capitalizes on analytical insights and machine learning models about customers and business operations to automate actions…for individual customers through the right channel.”



Forrester identified and reviewed 11 vendors using its ‘New Wave’ format, reserved for situations in which a segment is so new that in-depth user reviews aren’t yet judged possible.  This is an eclectic group of providers, including some well-known analytic platform vendors alongside others from the BPA and business rules automation side of the tracks.

Recognizing that the segment is new, Forrester and many of the user reviewers find that many of these offerings have major shortcomings in one or more areas.

While BPA and business rules automation are surely of interest to their users, I was most interested in how well predictive analytics had been integrated into the package.  Of the ten criteria used by Forrester in its review, two relate specifically to this issue.

  • Analytic Features: Which data analysis features are built into the platform, and how well do they work?
  • Functional Integration: What integration features does the product provide to create and consume externally created and hosted decision models?

The results on these two criteria are relatively easy to predict.  The vendors we know from the data science space, FICO, IBM, TIBCO, and SAS, all score well on Analytic Features, while others do not.  However, in this group only SAS scores high on the ability to utilize external models.

My take is that this has more to do with growing pains in a new platform than with intentional oversight.  I expect this will be a focus feature in coming releases.


What Do the Users Say, and Who Are They?

Keeping in mind that there are only a few reviewers for each platform, two sets of comments stood out to me.

No-code was mentioned as a specifically appreciated feature, but it appeared in only 3 of the 11 reviews.  It’s not clear whether any of the other 8 providers offer a no-code developer environment.

A long learning curve was called out by several reviewers, which is probably to be expected in a new category offering with few prior users.

Reading between the lines, it’s evident that the data science team is not the primary user.  These platforms appear to be designed primarily for teams composed of application designers/developers and business analysts focused on process improvement.


What about the Data Science

There’s also no mention of any automation of the model creation, nor frankly did I expect to find one.  It appears that the internal data science team continues to have a key role in the creation and maintenance of the embedded analytics.

In this regard, the ability of the platform to intake previously designed predictive models from different platforms may be as important as the ability to build models natively within the platform.

My guess is that this will appeal most to companies already on their Business Process Automation journey.  The challenge these users have faced so far is a common one in data science: how to implement predictive analytic models within the business process so that the end user finds the result easy and seamless.

This marriage of BPA and advanced analytics seems like a natural and logical extension.  The challenge is that this is still a custom development platform and not the tightly focused ‘full stack’ solution offered by ‘vertical strategy’ developers. 

Not that these ‘full stack’ solutions are plug-and-play.  All of these require custom developed and maintained predictive models.  If your developer group values a single platform to use across many applications in your business, you’ll want to give these a look.



Other articles by Bill Vorhies.


About the author:  Bill Vorhies is Editorial Director for Data Science Central and has practiced as a data scientist since 2001.  He can be reached at:



POTUS and the Stock Market

MMS Founder

Article originally posted on Data Science Central. Visit Data Science Central

For those who follow the stock market, October’s been a pretty rough month, with overall market levels, as measured by major indexes such as the Russell 3000 and the Wilshire 5000, now down into correction territory of 10 percent declines. The falls, unfortunately, closely follow a September 29th speech by the president “touting” his tangible influence on the market. Not very good timing.

Presidents generally shy away from stock market attribution claims, aware that their influence is indirect through the economy at best, and knowing that what goes up can quickly come down — remember 2008? Former treasury secretary and Harvard president Larry Summers is pointed in his assessment, noting “It’s crazy for a president to wrap himself in the stock market”. Investor’s Business Daily apparently doesn’t share Summers’ view, opining “Since the day Donald Trump was elected president in November of 2016 the Dow Jones industrial average has risen by some 35%, making the last 14 months one of the greatest bull-market runs in history.” Alas, the IBD article was written six months ago, when all was good.

I agree with Summers and, ever the data geek, just had to test the rosy claims against current data. So I downloaded daily returns of the Wilshire 5000 Index with dividends reinvested. I then contrasted the Wilshire performance under the sitting POTUS from inauguration day through October 29 against comparable time frames from his predecessor’s two administrations. My “analytics” were simply percent changes of the Wilshire index over time. I acknowledge the cynic’s claim that I chose this time frame purposefully, just as I point out that the POTUS and IBD were purposeful in the timing of their promotions as well.
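The percent-change arithmetic behind this comparison is simple enough to sketch. The index levels below are invented placeholders for illustration, not actual Wilshire 5000 data:

```python
# A minimal sketch of the "analytics" described above: the cumulative
# percent change of an index between two dates. The levels used here are
# hypothetical placeholders, not actual Wilshire 5000 values.

def percent_change(start_level: float, end_level: float) -> float:
    """Cumulative percent change from start_level to end_level."""
    return (end_level / start_level - 1.0) * 100.0

# Hypothetical index levels at the start and end of a comparison window
levels = {"inauguration_day": 100.0, "october_29": 118.5}
print(round(percent_change(levels["inauguration_day"], levels["october_29"]), 1))
```

Repeating this over each administration’s window yields the comparison described in the post.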

The technology used for the computations included JupyterLab Beta and Microsoft Open R 3.4.4.

Read the entire blog here.


Wave 2 Agile: Living the Agile Mindset

MMS Founder

Article originally posted on InfoQ. Visit InfoQ

Living the agile mindset means actually doing it, not just talking about it. Living agile is only accessible to those who say yes to personal growth in a big way. If you want different behaviours in your organization, change your own behaviour. This is what Michael K Sahota calls “Wave 2 of Agile”, and he invites everyone to join.

Michael K Sahota, trainer and consultant at Agilitrix, gave a talk about living the agile mindset at eXperience Agile 2018. InfoQ is covering this conference with Q&As, summaries, and articles.

Sahota presented what he calls the “waves of agile”. Wave 1 is about the ideas: learning, collaboration, responding to change. Wave 2 is about actually living the ideas. Walking the talk; not just talking about these things, but modelling them.

Culture is the number one challenge in organizations. We need both doing and being agile: doing the practices and understanding why we are doing them, argued Sahota.

With agile the leader’s role is pivotal. Sahota stated that “as leaders, we need to learn how to give away power and coach our people in how to receive it”.

InfoQ interviewed Sahota about living the agile mindset, and what agile coaches and leaders can do to support it.

InfoQ: How can wave 2, living the agile mindset, look in practice?

Michael K Sahota: In wave 1 we tell business owners that they need to “respond to change” when building the project takes longer than we hoped. But how are we at responding to change? How do we react when someone misses a meeting or appointment? How do we react when someone does not deliver on a commitment? If you are a normal person, then these situations will cause some sort of emotional disturbance that will limit our ability to “respond to change”. If we were living agile, we would flow with these situations and with life. When we are committed to a learning mindset, we see these limitations in ourselves and choose to grow. That is living the agile mindset. It is a place of radical high performance, and it is only accessible to those who say yes to personal growth in a big way.

Let’s take an example of an agile team retrospective. Let’s say we are looking at collaboration between team members. What is the real blocker here? It’s about our behaviours. If we are committed to learning and improving, then of course we will look into our own behaviour and make changes.

InfoQ: What can agile coaches do to help leaders go first with agile?

Sahota: All too often as coaches we fall into a trap. And I know. I lived in this trap for years so I have some expertise …

The trap is that we ask leaders to do things that we don’t do ourselves. For example, we tell leaders that they need to be coached, but we aren’t getting coached ourselves. More importantly, we need to look at the places where we are modeling collaboration, and model learning ourselves.

A very simple test and guard-rail that everyone can apply right now is to “Ask for Permission”. A lot of the time we end up inflicting help on people who don’t really want it.

Before we can ask leaders to model a new way of working, we need to model it first. Otherwise it just doesn’t make sense.

InfoQ: How can we foster a culture where people pull in change instead of resisting it?

Sahota: Hahaha. There is a lot to say on this…

The simple answer is simple, but it will take work to put into practice. We are deeply conditioned by families, the education system, and society to push. We push by making people do things, selling our ideas, trying to convince others, evangelizing, etc. All this does is create resistance. The secret is to first see these behaviour patterns in ourselves and overcome them. Then we can help others. That is how we create culture: by how we show up.

InfoQ: What’s your advice to leaders who want their organization to live the agile mindset?

Sahota: It is really simple to understand. The organization reflects the behaviours of the leader. If you want different behaviours in your organization, look at changing your behaviours. It is hard, but it is the only thing that will work. People become leaders when they lead. Leading means going first. Every manager and coach has a choice about whether they want to be a leader. So the question is: how ready are you to go first?


Electron 3 Release Increases Stability

MMS Founder

Article originally posted on InfoQ. Visit InfoQ

The Electron team recently announced the release of version 3 of Electron. This release includes numerous enhancements and improvements, including support for reading massive files, better APIs for managing applications, and logging and performance measurement capabilities.

Like many modern software projects, Electron strives to have more regular releases with smaller breaking changes between releases. With just four months between the 2.0.0 and 3.0.0 release, and beta releases of version 4 already underway, Electron aims to provide a rapidly stabilizing and improving platform for building desktop apps with Node.js, Chrome, and other modern web development APIs.

Electron 3 updates its major dependencies to Chrome version 66.0.3359.181, Node.js version 10.2.0, and V8 JavaScript engine version 6.6.346.23.

One of the challenges with Electron was distinguishing between development and production applications. The new app.isPackaged property returns a boolean indicating whether the application has been packaged for a production release.

Another challenge in early Electron releases was determining whether an application was ready: app.isReady() checks if Electron is ready, and app.on('ready') is the way to be notified when it becomes ready. Source code that could get called at any time would have to first check app.isReady() and, if it returned false, subscribe to the app.on('ready') event. The new app.whenReady() function encapsulates that sequence by returning a promise that is fulfilled once Electron is initialized.

To provide more performance profiling details, the new process.getHeapStatistics() API returns the same heap measurements provided by the V8 JavaScript engine. Also, the new netLog API provides dynamic logging control. net.startLogging(filename) and net.stopLogging([callback]) control when network logging begins and finishes.

File system access has also improved with the Electron 3 release.  fs.readSync now supports loading of massive files. Improvements to file system paths include Node.js file system wrappers to make fs.realpathSync.native and fs.realpath.native available to Electron applications. The new TextField and Button APIs are part of a larger initiative to add standard user interface controls.

Electron 3 also improves user experience APIs. win.moveTop() makes it possible to move a window to the top of the z-order without changing input focus, to avoid interrupting users unexpectedly.

Complete lists of breaking changes and bug fixes in Electron 3 are available in the Electron 3 release announcement.

Numerous improvements are already underway for Electron 4 and progress may be viewed with the Electron releases summary. With this and future releases, Electron continues to improve on its powerful platform for building desktop applications with web technologies.

Electron also has an App Feedback Program to allow developers to provide early testing and feedback during the beta release cycle. For the 3.0 release, the Electron team thanks Atlassian, Atom, Microsoft Teams, Oculus, OpenFin, Slack, Symphony, VS Code, and other program members for their assistance.

Electron is available via the MIT open source license. Contributions are welcome via the Electron GitHub organization and should follow Electron’s contribution guidelines and code of conduct.


Adding intelligent features to your Android apps – Video tutorial

MMS Founder

Article originally posted on Data Science Central. Visit Data Science Central

Machine Learning for Android App development Using ML Kit [Video]

This course is for Android developers who want to try their hand at building a machine learning app using the new ML Kit SDK that Google recently released. This course does not require any previous knowledge of machine learning, as a basic introduction will be given so that you can fully understand the content.



ML Kit makes it easy to apply ML techniques to your apps by bringing Google’s ML technologies together in a single SDK. With ML Kit you can have features such as text recognition, face recognition, barcode scanning, image labeling, and landmark recognition at your fingertips in your apps. In this course, you are going to implement all these features in your Android applications using ML Kit.

After completing this course, you will be confident enough to build Android applications with built-in machine learning features, providing an amazing user experience. You will be able to go out into the world and create your own useful machine learning apps using ML Kit.

All the code is available at:


What you will learn:

  • Explore how machine learning is changing the world we live in.
  • Configure UIs with camera settings and use them in your app.
  • Implement text recognition and deploy it with Firebase on the cloud.
  • Perform face detection by adding it to your app and trying it out!
  • Scan through barcodes by adding the barcode scanning feature to your app.
  • Identify images by image labeling and deploy them with Firebase on the cloud.
  • Add features such as landmark recognition to your apps to identify a specific landmark.


About the Author:

Yusuf Saber is an Android developer with over 5 years’ professional experience. Yusuf earned his Master’s degree in Computer Engineering from Ryerson University in 2011 and started his career as a .NET developer before quickly turning to Android. He has worked on a large range of Android apps, from social to multimedia to B2B, and more!


Get a preview of the video here.


Microsoft Announces General Availability of Azure IoT Central

MMS Founder

Article originally posted on InfoQ. Visit InfoQ

Microsoft has announced the general availability of Azure IoT Central, a software as a service solution for working with the internet of things. With Azure IoT Central, Microsoft envisions a low-code approach to designing, developing, configuring and managing IoT devices, while providing out of the box security, scalability, and integration with processes and applications.

Azure IoT Central is built on top of Azure IoT Hub, Microsoft’s platform as a service solution for creating reliable and secure bidirectional communications between IoT devices and cloud solutions, much like Google Cloud IoT and AWS IoT. However, instead of just providing the tools to build and manage the entire IoT platform, Azure IoT Central makes this available to anyone even without cloud expertise, as stated by Arif Bacchus, staff writer at Digital Trends.

Built on the power of the Azure cloud, it is a fully managed SaaS offering for customers and partners that enables powerful IoT scenarios without requiring cloud solution expertise.

Customers can expect a plethora of capabilities from Azure IoT Central, ranging from quickly onboarding new devices to gaining insights from their data, without needing high proficiency with IoT solutions. For example, it is now possible to connect new devices with zero-touch provisioning using the Device Provisioning Service. Additionally, jobs allow for bulk device management, with options to reboot, reset, or update devices. Moreover, this is all done at the scale expected in an IoT scenario and can target a specific device or groups of devices. Azure IoT Central also provides the notion of device templates, used to reuse existing device configurations.

A device template is a blueprint that defines the characteristics and behaviors of a type of device that connects to an Azure IoT Central application.

When it comes to monitoring devices, monitoring rules can automatically trigger up to five actions, notifying both humans and other systems. These actions can call on a wide variety of services, such as sending out an email, leveraging Azure Functions to execute a piece of custom code, or calling webhooks for other services. Another possibility is to implement workflows through the integration of Microsoft Flow, providing access to many connectors to services inside of Azure and beyond. Moreover, along with the announcement of the general availability of Azure IoT Central, Microsoft also announced the incorporation of Connected Field Service, allowing maintenance to be proactively scheduled based on the data coming from devices.


Another essential aspect of IoT scenarios is gaining insight into the data received from the various devices. Consequently, Azure IoT Central provides out-of-the-box analytics, which allow a better understanding of the data with minimal configuration. Furthermore, Microsoft also offers a Power BI solution that works directly on the data from Azure IoT Central, providing additional capabilities to analyze the metrics for the IoT solution:

  • Track how much data your devices are sending over time
  • Compare data volume between telemetry, states, and events
  • Identify devices that are reporting lots of measurements
  • Observe historical trends of device measurements
  • Identify problematic devices that send lots of critical events

Finally, together with the general availability announcement, Microsoft also adjusted the pricing of Azure IoT Central. It now allows working with a small number of devices for free, while also offering a pricing model in which the per-device price declines as device counts grow.


Sentiment Analysis: Types, Tools, and Use Cases

MMS Founder

Article originally posted on Data Science Central. Visit Data Science Central

What do you do before purchasing something that costs more than a pack of gum? Whether you want to treat yourself to new sneakers, a laptop, or an overseas tour, processing an order without checking out similar products or offers and reading reviews doesn’t make much sense anymore. Thanks to comment sections on eCommerce sites, social networks, review platforms, and dedicated forums, you can learn a ton about a product or service and evaluate whether it’s good value for money. Other customers, including your potential clients, will do all of the above.

People’s desire to engage with businesses, and the overall perception of a brand, depend heavily on public opinion. According to a survey by Podium, 93 percent of consumers say that online reviews influence their buying decisions. Users may not give you a chance once they’ve read a few bad reviews. They won’t research whether the feedback was fake or not; they’ll simply choose another option. In this context, organizations that constantly monitor their reputation can address issues in a timely manner and improve operations based on feedback. Sentiment analysis allows for effectively measuring people’s attitudes towards an organization in the information age.

What is sentiment analysis

Sentiment analysis is a type of text research, also known as text mining. It applies a mix of statistics, natural language processing (NLP), and machine learning to identify and extract subjective information from text, for instance, a reviewer’s feelings, thoughts, judgments, or assessments of a particular topic, event, or company and its activities, as mentioned above. This analysis type is also known as opinion mining (with a focus on extraction) or affective rating. Some specialists use the terms sentiment classification and extraction as well. Regardless of the name, the goal of sentiment analysis is the same: to learn a user’s or audience’s opinion of a target object by analyzing a vast amount of text from various sources.

You can analyze text at different levels of detail, and the level of detail depends on your goals. For example, you may define the average emotional tone of a group of reviews to know what percentage of customers liked your new clothing collection. If you need to know what visitors like or dislike about a specific garment and why, or whether they compare it with similar items by other brands, you’ll need to analyze each review sentence with a focus on specific aspects and usage, or on specific keywords.

Depending on the scale, two analysis types can be used: coarse-grained and fine-grained. Coarse-grained analysis allows for defining a sentiment on a document or sentence level. And with fine-grained analysis, you can extract a sentiment in each of the sentence parts.

Coarse-grained sentiment analysis: analyzing whole posts/reviews or sentences

This analysis type is done at the document and sentence levels. In fact, most specialists use it to analyze sentences rather than whole documents. Coarse-grained SA entails two coherent tasks: subjectivity classification, and sentiment detection and classification.

1. Subjectivity classification. First, it’s necessary to determine whether a sentence is objective or subjective. An objective sentence contains some facts about an object or topic: Three strangers are reunited by astonishing coincidence after being born identical triplets, separated at birth, and adopted by three different families.

A subjective sentence, as the name suggests, expresses someone’s attitude regarding a subject: This apartment is wonderful. I enjoy every minute I spend in here.
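A toy version of this first step can be sketched in Python. The cue-word list below is an illustrative stand-in; real subjectivity classifiers are trained on labeled corpora:

```python
# Toy subjectivity check: flag a sentence as subjective if it contains
# opinion-bearing cue words. The cue list is a made-up stand-in for a
# trained classifier or a real subjectivity lexicon.

CUE_WORDS = {"wonderful", "enjoy", "love", "hate", "terrible", "think", "feel"}

def is_subjective(sentence: str) -> bool:
    words = {w.strip(".,!?").lower() for w in sentence.split()}
    return bool(words & CUE_WORDS)

print(is_subjective("This apartment is wonderful."))                 # True
print(is_subjective("Three strangers are reunited by coincidence.")) # False
```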

2. Sentiment detection and classification. The goal of this operation is to define whether a sentence has a sentiment or not and if it does, to determine whether the emotion is positive, negative, or neutral.


Sentiment by polarity. Source: KDNuggets

Sometimes people share their points of view without emotions. For instance, the author of the sentence I think everyone deserves a second chance expresses their subjective opinion. However, it’s hard to understand how exactly the writer feels about everyone. So, the sentence doesn’t express a sentiment and is neutral. Neutral sentences – the ones that lack sentiment – belong to a standalone category that should not be considered as something in-between. 
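A minimal lexicon-based sketch of this three-way decision, using tiny made-up word lists in place of a real sentiment lexicon or trained model:

```python
# Toy polarity classifier: count positive and negative lexicon hits and
# fall back to "neutral" when no sentiment-bearing words are found.
# Both word lists are illustrative stand-ins for a real lexicon.

POSITIVE = {"surprising", "satisfying", "wonderful", "enjoy", "great"}
NEGATIVE = {"clumsily", "mediocre", "pointless", "bad", "rife"}

def polarity(sentence: str) -> str:
    words = {w.strip(".,!?").lower() for w in sentence.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"  # no sentiment-bearing words: a standalone category

print(polarity("One of the most surprising and satisfying movies of the year."))
print(polarity("I think everyone deserves a second chance."))
```

Running it on the example sentences discussed here yields "positive" for the movie comment and "neutral" for the second-chance opinion.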

Let’s look at this comment: One of the most surprising and satisfying movies of the year. According to the phrase, the reviewer enjoyed the movie, so this sentence contains a positive sentiment.

And the following review is a clear example of a subjective sentence with negative sentiment: The fact that it’s also clumsily made and rife with mediocre performances seems almost beside the point in the context of how pointless this thing is in the first place.

However, objective sentences can also express a sentiment: I bought this waterproof camera case because it’s meant to be more reliable than a standard one. It’s clear from the context that the case wasn’t what the person expected. The sentence has a negative sentiment, but it’s expressed implicitly.

Sentiment doesn’t depend on subjectivity or objectivity, which can complicate the analysis. But we still need to distinguish sentences with expressed emotions, evaluations, or attitudes from those that don’t contain them to gain valuable insights from feedback data.

Fine-grained sentiment analysis: analyzing sentence by parts

The devil is in the details, as they say. If you need more precise results, you can use fine-grained analysis.

You apply fine-grained analysis at the sub-sentence level, and it is meant to identify the target (topic) of a sentiment. A sentence is broken into phrases or clauses, and each part is analyzed in connection with the others. Simply put, you can identify who talks about a product and what exactly a person says about it in their feedback. In addition, it helps you understand why a writer evaluates it in a certain way.

The fine-grained analysis is useful, for example, for processing comparative expressions (e.g. Samsung is way better than iPhone) or short social media posts.

Not only does it allow you to understand how people evaluate your product or service, it also identifies which feature or aspect they discuss: A touchpad on my laptop stopped working after 4 months of use. This way, you know exactly what must be improved or reconsidered.
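Aspect identification can be sketched with a hand-picked aspect vocabulary, standing in for the aspect-extraction models real systems use:

```python
# Toy aspect spotter: report which known product aspect a review clause
# mentions. The aspect vocabulary is a made-up stand-in for illustration.

ASPECTS = {"touchpad", "battery", "screen", "keyboard", "camera"}

def aspect_of(clause: str):
    for word in clause.lower().split():
        word = word.strip(".,!?")
        if word in ASPECTS:
            return word
    return None  # no known aspect mentioned

print(aspect_of("A touchpad on my laptop stopped working after 4 months of use."))
```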

The capability to define sentiment intensity is another advantage of fine-grained analysis. In addition to three sentiment scores (negative, neutral, and positive), you can use very positive and very negative categories.
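The intensity extension can be sketched as a mapping from a numeric sentiment score in [-1, 1] to the five labels; the cutoff values below are arbitrary illustrative choices:

```python
def intensity(score: float) -> str:
    """Map a sentiment score in [-1, 1] to a five-point intensity label.

    The thresholds are arbitrary choices for illustration.
    """
    if score <= -0.6:
        return "very negative"
    if score < -0.1:
        return "negative"
    if score <= 0.1:
        return "neutral"
    if score < 0.6:
        return "positive"
    return "very positive"

for s in (-0.9, -0.3, 0.0, 0.4, 0.8):
    print(s, intensity(s))
```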

How to conduct sentiment analysis: approaches and tools

Sentiment analysis allows you to look at your operations from a customer point of view. But how do you extract that knowledge from user-generated data?

Data collection and preparation. First, you need to gather all relevant brand mentions in one document. Consider selection criteria: should these mentions be time-limited, in only one language, from a specific location, etc.? Then the data must be prepared for analysis: read it, delete all non-textual content, fix grammar mistakes and typos, and exclude irrelevant content such as information about reviewers. Once the data is prepared, we can analyze it and extract sentiment from it.

As dozens or even hundreds of thousands of mentions may require analysis, the best practice is to automate this tedious work with software.
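The preparation step above can be sketched as a small cleaning function; the regular expressions are simplified stand-ins for a production pipeline:

```python
import re

def clean_mention(text: str) -> str:
    """Strip non-textual content from a raw brand mention before analysis."""
    text = re.sub(r"<[^>]+>", " ", text)           # drop leftover HTML tags
    text = re.sub(r"https?://\S+", " ", text)      # drop links
    text = re.sub(r"[^A-Za-z0-9' ]+", " ", text)   # drop other non-text noise
    return re.sub(r"\s+", " ", text).strip().lower()

print(clean_mention("<b>LOVE</b> this phone!! see https://example.com"))
```

Applied to a scraped mention like the one above, it returns the plain lowercase words ready for sentiment scoring.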

Keep reading…
