What do you like best about Ghost Inspector?
- Detailed failure messages and video of what happened during tests; it was easy to figure out why a particular test failed without re-running it.
- API integration; we were able to wire our test suites into our CI pipelines without too much additional effort (using the Azure DevOps plugin and the PagerDuty integration for failures)
- Responsive support - loved that you guys are available to answer questions whenever we have them
- Data-driving; we used this feature to capture and drive tests in different environments with different master data (single test, multiple executions), which reduced cost of ownership
- Ability to build componentized tests constructed from reusable script components; massively reduced the cost of ownership by removing duplication between tests
- Costs are relatively low for our entire org when compared with other commercial tools (HP, Ranorex, Tosca)
- Quickly recording data migration or other ad-hoc automation jobs is a cinch, accelerating data setup and other tasks. Review collected by and hosted on G2.com.
What do you dislike about Ghost Inspector?
As a long-term heavy user of GI, the following are listed as dislikes, but hopefully the team will consider these suggestions :)
- Our app relies heavily on successful API calls to sync the test with the app, and GI provides no mechanism to wait for the result of a specific API call before continuing execution. This resulted in flaky tests that would sometimes pass and sometimes fail depending on how quickly the API responded.
- Without care, tests ended up being a long list of browser instructions (click, check, navigate, etc.) and the sense of the test was lost. It would be great to see a way to group / folder up steps within a test, labelled with the intent of that part of the test (labels per step are a half-way house, but require test authors to use them ...)
- Global / test variables are very useful, and it would be even better if they were suggested automatically when writing steps (autocomplete / suggest)
- Automatically generated element locators based on ARIA markup (role, label, for, etc.) would be better than the current DOM-structure-based approach (td > div > etc.).
- Source control; there isn't any. Whilst you can extract and source-control tests individually, once you start importing / calling other scripts, all bets are off. An audit history for tests (who edited what / when) would go a long way toward working around this issue. Maintaining different versions of a test for different feature sets (feature switches) was unmanageable.
- API integration; stopping a test run is not currently included in the Azure DevOps integration, so cancelled runs have to be stopped manually in GI to avoid consuming minutes
- You can only wait up to a minute for an element, and this limit is global for the entire test, so if a step or process takes longer than that, you have to use a static wait, which introduces flakiness into your tests. It would be great if there were a method to wait for an element to be shown (or hidden) with a timeout longer than 60s.
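For what it's worth, the workaround we gravitated toward for the last two points was a custom JavaScript step that polls for a condition with its own timeout. The sketch below is generic, not Ghost Inspector's API: the `waitFor` helper, its defaults, and the `window.appDataLoaded` flag are all hypothetical names assumed for illustration.

```javascript
// Hypothetical polling helper for use in a custom JavaScript test step.
// Resolves once predicate() returns true; rejects after timeoutMs.
// None of these names or defaults come from Ghost Inspector itself.
function waitFor(predicate, timeoutMs = 120000, intervalMs = 250) {
  return new Promise((resolve, reject) => {
    const start = Date.now();
    const timer = setInterval(() => {
      let ready = false;
      try { ready = predicate(); } catch (e) { /* not ready yet; keep polling */ }
      if (ready) {
        clearInterval(timer);
        resolve();
      } else if (Date.now() - start >= timeoutMs) {
        clearInterval(timer);
        reject(new Error('waitFor timed out after ' + timeoutMs + 'ms'));
      }
    }, intervalMs);
  });
}

// Example: wait up to 2 minutes for a flag the app could set once its API
// call completes (window.appDataLoaded is an assumed flag, not a real one),
// sidestepping both the 60-second ceiling and the API-sync race:
// waitFor(() => window.appDataLoaded === true, 120000);
```

The same pattern covers the element-visibility case by polling `document.querySelector(...)` instead of a flag.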