Inventory Challenges in Multi-Channel Retailing

In any eCommerce or marketplace integration, one of the key questions and decisions is how stock will be handled across a multi-channel retail business.

This post describes the challenges businesses face, particularly those with longer assembly and manufacturing lead times or those operating a reselling business.

We need Real-Time stock!

The nirvana! Once upon a not-too-distant past, this was nearly possible.

Your ERP system was your master, and only inventory store. Your sales/customer services team would enter orders into this system; your stock would be dispatched & received into this system. There was only a single truth.

In this age of multi-channel reselling this simply isn’t possible.

What is real-time?

To us, a real-time system reacts almost instantly, i.e. within microseconds or milliseconds. Examples of real-time systems are avionics and flight control systems.

OK, so you may not need that level of precision. The point is that any multi-channel retail operation is subject to considerable timing issues, making a globally consistent view of inventory in real time, or even accurate to within five minutes, impossible. There are just too many moving parts.

Multi-Channel Inventory

Challenges

Let’s first tackle the challenges with inventory updates across the different channels.

eCommerce

Your eCommerce store will, in almost all cases, maintain its own copy of inventory. Self-hosted eCommerce stores, e.g. Magento or WooCommerce, may have the facility to update/synchronise inventory quickly and many times throughout the day. In this case you can get pretty close to a consistent inventory.

Hosted solutions such as Shopify, however, limit or throttle the number of updates. Large numbers of updates will be delayed by seconds or minutes.

Marketplaces

If you update a marketplace's available inventory, that marketplace holds yet another view or copy of inventory. Marketplaces are typically bound by the same restrictions which apply to hosted eCommerce stores, i.e. you are limited in the number of updates you can make to that inventory.

Compounding this, some marketplaces can be updated more easily than others.

Amazon requires the same heavyweight batch update to update a single item as to update many thousands of items. Furthermore, the update is asynchronous, i.e. your request gets put into a queue where it could take anywhere from a few seconds to several minutes to process. In addition, the number of inventory updates is subject to throttling. In essence, Amazon really expects sellers to batch their entire inventory and for that inventory to be updated several times a day.
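To make the asynchronous flow concrete, here is a minimal sketch in Python of the submit-then-poll pattern. It assumes a hypothetical marketplace client exposing submit_inventory_feed and get_feed_status helpers; it illustrates the pattern, not any particular marketplace SDK.

    import time

    def wait_for_feed(client, feed_id, poll_seconds=30, timeout_seconds=1800):
        """Poll an asynchronous marketplace feed until it finishes (or we give up)."""
        deadline = time.time() + timeout_seconds
        while time.time() < deadline:
            status = client.get_feed_status(feed_id)   # hypothetical helper
            if status in ("DONE", "CANCELLED", "FATAL"):
                return status
            time.sleep(poll_seconds)                   # back off to respect throttling
        raise TimeoutError("Feed %s still processing after %ss" % (feed_id, timeout_seconds))

    # Usage: submit the whole batch once, then wait for the marketplace to process it.
    # feed_id = client.submit_inventory_feed(all_inventory_rows)   # hypothetical helper
    # result = wait_for_feed(client, feed_id)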

Other marketplaces, such as Walmart, can be updated with either a single lightweight JSON REST request or a batch upload, but both are subject to throttling.

Order Processing

You may be able to get your eCommerce solution to post orders into your ERP within seconds, but marketplaces require you to poll for new orders. Polling across multiple channels may take several minutes, depending on response times and any request throttling.
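As a rough illustration of why multi-channel order polling introduces latency, the following Python sketch polls each channel no more often than its assumed minimum interval; the channel names, intervals and fetch_new_orders method are placeholders, not real APIs.

    import time

    # Assumed per-channel minimum polling intervals (seconds), driven by throttling rules.
    POLL_INTERVALS = {"amazon": 120, "walmart": 60, "webstore": 30}
    last_polled = {channel: 0.0 for channel in POLL_INTERVALS}

    def poll_new_orders(clients):
        """Poll each channel for new orders, respecting its minimum polling interval."""
        orders = []
        for channel, client in clients.items():
            now = time.time()
            if now - last_polled[channel] < POLL_INTERVALS[channel]:
                continue                                  # too soon for this channel
            orders.extend(client.fetch_new_orders())      # hypothetical client method
            last_polled[channel] = now
        return orders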

If you combine this with other sales channels, such as Point of Sale and salespeople entering orders through a mobile or tablet-based application, you end up with a lot of contention around available stock, particularly for fast-selling items.

What are the remedies?

Segmented Stock

With segmented stock, each channel is allocated its own dedicated pool of inventory, so channels never compete for the same units. If you're selling through a large number of channels this may not be appropriate; however, you may wish to consider this approach for fast-selling channels such as Amazon, or for your top 10 best-selling items.
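As a simple illustration of the idea, the sketch below (Python, with illustrative channel names and percentages) splits the on-hand quantity into dedicated per-channel pools; any rounding remainder is left as a small buffer.

    def segment_stock(on_hand, allocations):
        """Split on-hand stock into dedicated per-channel pools.

        allocations maps channel name to its share of on-hand stock, e.g.
        {"amazon": 0.5, "webstore": 0.3, "ebay": 0.2}.
        """
        pools = {channel: int(on_hand * share) for channel, share in allocations.items()}
        pools["unallocated"] = on_hand - sum(pools.values())   # rounding remainder kept as a buffer
        return pools

    # Example: segment_stock(125, {"amazon": 0.5, "webstore": 0.3, "ebay": 0.2})
    # -> {"amazon": 62, "webstore": 37, "ebay": 25, "unallocated": 1}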

Safety Stock

In this approach you maintain a reserve quantity of inventory and publish only the remainder to your channels. Each channel sells against its published quantity, and the reserve should leave enough inventory to satisfy every order, even when channels sell the same item at nearly the same time.

This isn't failsafe, but with analysis and planning you should be able to judge the expected sales, with consideration of the marketplace rules such as cancellations, maximum time to ship, etc.
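A minimal sketch of the calculation, assuming you publish the same figure to every channel and hold back a fixed reserve:

    def publishable_quantity(on_hand, allocated, safety_stock):
        """Quantity to publish to each channel: physical stock less existing
        allocations and the safety reserve (never negative)."""
        return max(on_hand - allocated - safety_stock, 0)

    # Example: 100 on hand, 15 already allocated to open orders, 10 held in reserve
    # -> publish 75; the 10-unit reserve absorbs near-simultaneous orders across channels.
    print(publishable_quantity(100, 15, 10))   # 75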

Conclusion

Don’t be fooled, there is no easy solution! You need to pick a solution which is suitable for your business or contact us to discuss your problem.

Five favourite parts of IMan

If you're working with a single product day in, day out for three and a half years, you can sometimes lose sight of what you're trying to achieve: tunnel vision! My aim in creating IMan was to build a general-purpose integration platform with a strong focus on usability, one flexible enough to cover 95% of all integration requirements.

Have these been achieved? I would like to share some insights on the development process, the goals, and some of the problems I encountered along the way.

The Preview

The preview grid, situated in the right half of each transform setup interface, is populated with the results of the transformation each time the 'Refresh' button is pressed. It is one of the key pieces that facilitate easy and rapid integration design and, as one user put it, with the preview area 'you're not working blind'. Development, however, was not easy! Several weeks were spent, several calls made to Microsoft developer support, and many grey hairs grown! The end product involves a multi-threaded WCF Windows service with out-of-process COM interop (I learnt a lot)! The end result is that multiple designers can work simultaneously on a single instance of IMan.

Common Connector Interface

The same interface is used for mapping Sage300 as for Sage200 and SageCRM. This has been an evolutionary development, but it results in a common interface for every application, which especially helps consultants working with multiple products.

Hierarchical Support & Associated Transforms

There was no point in producing another integration product where hierarchical data support was shoehorned in, so IMan has been built from the ground up to support hierarchical data. The goal is to make the design process as intuitive as possible whilst increasing the software's flexibility.

Free Form Designer

To achieve the ‘95% integration coverage’ IMan is designed so that each transform/node within the integration can be linked to the previous transform ad infinitum, permitting multiple transforms to be linked or branched to build complex integration logic.

The major challenge was to provide a design surface which allowed interactive diagram creation similar to that of a flowchart i.e. to visualise the underlying implementation. Also, it needed to be in a web environment.

To achieve this, the only real choice was to use an off-the-shelf diagramming component (not an easy task as it needed to conform to web 2.0; it was 3.5 years ago). In-house development of such a component would have been sheer lunacy.

The result is the design surface we have today. We believe it facilitates everything from simple dataflows through to more complex integrations. Further, due to the flexibility, we’ve seen several of our customers use IMan for complex process automation, far beyond my original intention of IMan as a general integration tool.

Drop In Connectors & SDK

As a general integration tool, IMan required a means to connect to other applications through their APIs, particularly with the marked increase in hosted applications where a webservice connection is the only choice. The initial thought was to create a generic webservice connector, but the different protocols, implementations and necessary logic, plus a UI to configure it all, were just too complex (just look at SData, for example) and out of step with IMan's focus on usability.

To facilitate this, we’ve taken a low level approach and added the ability for third parties to develop their own connectors for external applications, web services, etc.

I believe we’ve taken the correct approach as IMan gives you the ability to connect to any application, not just webservices; further, it enables the developer to add ‘smarts’ into any connector.

From a developer’s perspective, I think the thought of connector add-ins is awesome!

Our partner, EC Internet, has already developed the Magento connector, with two more on the way for the Vortex ASP.Net Storefront eCommerce platform and Sage Pro.

Least Favourite

It's only fair to share this, so here's the hall of shame! Needless to say, each of the points below is on the enhancement list:

The Diagram Component

It’s not that bad, but it can be slow, clunky, and connectors can be difficult to manipulate, especially on large graphs.

Setting Up Hierarchical Datasources

Field names aren't inferred for file-based datasources, so you need to enter them manually, and the screen handling could be better!

Setup Tab Load

Why does it take so long to click on the setup tab?

Load Time

Not our fault, but I am pretty sure we could do better!

We’d like to know your thoughts: have we achieved our design goals? Are there other clunky features which need improving?

by David Weaver, Marketing Manager, Oakley ERP

No doubt you are looking over the Realisable IMan website because you have identified a pain point between your business software applications. The reasons are numerous, but the motive likely comes back to one familiar problem… lack of integration between your website, industry solution and Sage accounting software.

You are not alone; in fact, at Oakley ERP we have successfully paired IMan integration with Sage for over 60% of our customer base.

Whether this is connecting a bespoke software application or a more recognised solution such as Concur Expenses, we have successfully combined our development skills and experience of Sage with our strong partnership with Realisable.

Take advantage of our experience and take a look at these three tips for gaining a smooth integration with Sage:

1. A Clear Understanding Of What You Want To Achieve

All too often the project becomes a blanket statement built on the buzzwords of automation and integration. However, it is more important to understand what you would gain from undertaking such a project. It is all very well syncing data from one application to another, but what does the data sync actually achieve? For example, you might be looking to free up staff who would otherwise have to manually key in the data, which is not just time-consuming but also prone to human error.

With a few calculations you will soon be able to start working out the potential ROI of an integration project, and of course, while the numbers speak, don't discount the emotional benefits of providing staff with better workflows.

2. Gain A Closer Understanding Of The Value To Assign To The Project

An unfamiliar, custom integration project can take a heavy dip into the business budget, but IMan can change this perception: the core integration to Sage is already built, and only the custom development around it is needed to connect your application. There will always be bigger integration projects, but generally this makes projects take far less time than you might think. The benefit of working with an experienced IMan partner is that, having delivered a number of integration projects, their estimates are likely to be close to the final figure, as learning curves and similar past projects can be called upon for reference.

At Oakley, through our experience and expertise, we are able to provide estimates for IMan integration projects, with around 98% coming in as quoted.

3. Ensure You Appoint A Project Leader

If you are considering an integration project with your Sage solution, it is vital to appoint a go-to contact from your side of the project. They need to champion the project and help make the integration side of it work as smoothly as possible. There will be a lot of communication between both parties, and having an assigned contact can really make the difference in completing the project more quickly and easily, and in bringing it in on budget.

About the Author

David Weaver works as Marketing Manager at Oakley Global, a leading Sage Business Partner and IMan Business Partner. Oakley have delivered many successful Sage projects and routinely include IMan development in the majority of their proposals. Discover more about our leading Sage solutions, Sage 200 and Sage 300.

by Fiona Skinner, Content Queen, Realisable Software.

I began my Realisable adventure 20 months ago when James keenly saw the chance to utilise my scientific background and "fresh eyes" to analyse the help documentation for IMan. Flaws in the original documentation needed attention, as they led to missed sales, frustrated users, and all too many trivial support tickets. The logic for dragging me into it was that someone with little tech experience would be similar to one of our clients trying to access the help and use the software. If it makes sense to me, there's a better chance that it'll make sense to our users; and if our clients have an easier time learning to use our software, our developers can spend less time answering support queries and more time creating better data solutions, answering more complex problems and ultimately improving our service.

Along with analysing the content of the help, there was also the question of format. James wanted to somehow convert the documents from the impenetrable PDF to an accessible, online, searchable resource. Our solution was to use a market-leading one-source publishing platform called MadCap Flare. It was with this product that I spent months of my life, learning HTML and CSS, and eventually converting our key user guides into the form that they are today.

On my first review of the user guide, my most common annotation was "Why?". In response to almost every one of these, James told me to go and try it, whatever 'it' was. With an internal sigh, I went off to struggle with setting up email report groups, creating XML files, translating fields, understanding what an 'expando' field was, and to tackle my nemesis: hierarchical data keys. I got completely lost in the software, primarily because of confusion while interpreting techie explanations and instructions (or the lack thereof!). It was hard work, but I now have a thorough understanding of the software, and have created documentation that makes sense to me, which will hopefully help others avoid the struggle I went through.

"We hope it will ease the learning curve, leading to less frustrated conversations and consequently a better experience."
- James Hutchison -

As I was simultaneously learning how to use the software, we also found bugs that had snuck through product testing! From the beginning, James saw the opportunity for a front-to-back software development approach, and his method clearly pays off, as my review process also analyses the software itself. I've thoroughly reviewed the basic functionality of the software; as a result, bugs that were previously missed have been discovered, and we find ever more ways in which the whole package can be improved. Some bugs were easily fixed, but other adjustments are part of the next version release of IMan and will result in an even stronger product!

The online help has been live since July 2015, and I continue to refine it.

If you have any experience you’d like to share with us about help usability and back-to-front software development, we’d love to hear about it!

Find us on Twitter or get in touch on LinkedIn.

Over the years we've integrated a large number of Magento sites with different Sage products, hitting numerous issues along the way. Here we discuss the most common of these and how we've overcome them, either in the Magento connector (developed by our partner, EC Internet) or by tweaking settings within Magento.

Slow API

Anyone who has had to get data into or out of Magento via its API will know about its dreadful performance. For transactional data (orders, shipments, etc.), where the quantities are smaller, this is usually OK and, server infrastructure permitting, performance can usually be tuned to an acceptable point.

However, for master data such as product or customer records, where there are larger volumes and where there’s a need for frequent updates, the API is not only too slow, but can also result in database locking issues, update failures and the like.

To remedy this it is necessary to bulk update the data, i.e. to execute the update server side, within Magento itself. Magento does have an import/export function, but it is not conducive to a 24/7 eCommerce operation since it requires manual intervention and a specific CSV file format.

Our Magento connector addresses these by:

  • Incorporating bulk/batch upload and download functionality for customer & product data. For example, the bulk update functionality has reduced stock updates of around 2000 products from 3 hours to under 5 minutes.
  • Building the bulk functionality into the standard IMan integration flow. Not only does this result in configurable, easy-to-use mapping, but any errors are correctly reported on the standard IMan audit report, enabling users to take remedial action quickly and easily.

Performance and memory usage

The poor performance of the Magento API is well known, but queries resulting from API interactions (even small ones) can easily run afoul of memory limits. Settings to watch out for in your PHP setup are memory_limit and max_execution_time, along with MySQL's InnoDB buffer pool size, innodb_buffer_pool_size. If you're running into issues where API calls fail, altering these values may help.
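As a starting point, the relevant directives look like the following. The values are purely illustrative and need tuning to your catalogue size, traffic and server; they are not recommendations.

    ; php.ini (illustrative values only)
    memory_limit = 512M
    max_execution_time = 300

    # my.cnf (illustrative value only)
    [mysqld]
    innodb_buffer_pool_size = 2G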

We will be following up next month with a more comprehensive guide and tuning reference, so stay tuned.

Field Lengths

A common issue with any eCommerce/ERP integration is mismatched field lengths; address fields are particularly sensitive. Any Magento implementation should review these, and how the fields will map to Sage, otherwise there's a chance of mis-shipments.

Fortunately, our connector has an inbuilt remedy to match address field lengths to the integrated Sage ERP solution.

Magento Extensions and their challenges to integration

One of the key benefits of Magento is the breadth and depth of the extensions (plugins) which can enhance Magento's functionality in every conceivable way.

These extensions, however, can present a challenge to integration, as a number of them add and enrich data on Magento objects, and this enriched data often needs to make its way into the ERP.

As Magento (pre version 2) lacks a descriptor or schema for its object model, we can't know ahead of time what fields are available. A means is therefore needed to dynamically interrogate the Magento objects and discover what data is available, so the necessary data can be passed through to your Sage accounting or ERP solution.

Again, our Magento connector does this transparently: product, order and customer attributes, and additional payment details are all automatically recognised, and the data can be mapped/integrated based on requirements.

There are improvements to be made and our solution isn't perfect. The existing Magento API doesn't expose every piece of data; some of it is squirreled away with no API access. As part of our next release we'll be implementing a feature which will allow lookups and translations directly against the Magento database (via an API call), meaning every field can be reached.

Where do we go from here?

If you have any sticky issues or feature requests or just want to discuss the Magento connector in general please visit our forums:

/support/forum

In the last couple of weeks we've been doing quite a bit of work on our Sage300 connector to add update functionality for existing transactions.

When working in a non-UI, automated process, one of the key issues when updating a transaction is the ability to locate a particular line (or child record) within the transaction, so that it can be updated, deleted, or a new one inserted should one not exist.

We've had this sort of functionality within our software for quite some time (to locate particular lines when working with O/E Shipments, Invoices, P/O Receipts, etc.), but we needed to revisit our line search method to implement the new update functionality. Given the different methods available to search for transactions and records in Sage300, this prompted a hard look at whether we were using the best line search strategy.

This blog attempts to describe our investigations and the various search methods available within Sage300 to search for transactions, and to explore what each of these actually does under the hood.

Order Property & Indexes

One of the dysfunctions of Sage300 is that index use is explicit. Whenever Sage300 executes a query, rather than letting the database decide which index to use, the query generated by Sage300 expressly states which index the database must use. I can only blame Sage300's early days of running on Btrieve for this, but the result is that you must know how the index will affect the underlying query.

To alter the index is simply a matter of altering the Order property of the view.

One of the key discoveries of our research is that the Order property of a view can drastically affect the functionality of the search operation.

Cursor Accessed Indexes

It is worth noting that Sage300 employs two strategies for retrieving records: simple and cursor-accessed indexes.

Simple indexes simply specify a where clause within a single SQL statement.

Cursor-accessed indexes use a stored procedure (with a server-side cursor for positioning) to set up the query, and subsequent calls to the stored procedure to move the cursor and return the next record. The code below illustrates the initial setup of a cursor-based index.

Please note the explicit field list has been removed and replaced with * for brevity.

Sage300 then calls the stored procedure to return the next record from the query:

Whilst our testing indicates the index type is transparent to the calling operation, cursor-accessed indexes are significantly slower than simple indexes because of the additional SQL call needed to retrieve each next record.

Our investigation was a little inconclusive, and we'd be happy to hear from others on this point; but the use of cursor-accessed indexes in Sage300 appears to be dynamic:

  • A stored procedure/cursor was used when an index was specified but the search did not include any index fields. This would only occur on non-composed views or the parent view of a composed view.
  • Certain indexes always use cursors, composed or otherwise. The API doesn't provide any means to query whether an index is cursor accessed or simple.

Read

The first access method is the simplest: the Read function is designed to return the single record pointed to by the values of the fields of the corresponding Order or index. The following code returns a single O/E Order, using the Order Number:

SQL Trace

If we look at the SQL trace for the Read method, not only does it select just a single record with the top 1 directive; it also provides a hint to use the index corresponding to the Order selected.

DBSpy

DBSpy issues a similarly short statement involving a single Get operation.

In terms of database access and speed, Read is certainly the most efficient since it only ever retrieves a single record based on an index.

Using Read to iterate multiple records

Read will only ever return the first matching record based on the values set on the fields used in the index. It is therefore not really a viable way to iterate over multiple records, as it would be necessary to set the key field values, perform a Read, and repeat for each record.

Browse & Fetch

The Browse and Fetch operations combine to allow multiple records to be retrieved and iterated, and in contrast to the Read operation they allow you to search for records using non-indexed fields.

Specifying the Index of a Browse Operation

Like the Read function, the index used changes the way Browse and Fetch retrieve records. In the example below we're retrieving the Customer record using the Customer's Name, changing the index to 2 and setting, via the 'Put' method, the single field used by that index.

SQL Trace

One would expect from the above that the Browse operation would override any field set previously; however, it does not. Instead, because the field is part (or all) of the index, the value of the field is incorporated into the query.

Please note this query has been cleaned of the stored procedure setup code to illustrate the scenario.

Browse & Non Indexed Fields

Using Browse on non-indexed fields will result in Sage300 omitting them from the resulting query and instead using its own filtering mechanism executed client side in the Sage300 view code.

For example, a search of A/R Customer records using only the customer name will result in every single customer record being returned to Sage300, where each record is evaluated against the Browse criteria before the resultant dataset is returned to the calling client.

TIP: We recommend searching using indexes wherever possible, or using the CS0120 view to execute a query directly on the database (where the database can perform more efficient filtering).

Fetch is not GoTop

As mentioned previously there are several functions (GoTop, GoNext & GoPrevious) which work in conjunction with Browse. The scant inline help provided in VBA & VB6 seems to indicate the Fetch and GoTop functions act equivalently on the fetch of the first record after a Browse.

They do not!

Whereas Fetch always incorporates index field values, GoTop behaves erratically: it includes the values from the key fields most of the time and omits them at other, obscure times (see below). Using the same Customer query as previously, running GoTop instead of Fetch results in the following SQL:

TIP: We don’t have one… why does this happen?

Composed views and searching Child Views

Thus far the focus has been on searching either non-composed views or the parent view of a composed view. There are some gotchas when querying composed views!

A view composition is Sage300's implementation of record relationships and associations, similar to an object-relational system such as Hibernate. A view within a composition defines the relationship to one or more other views, which in turn define how they relate to other views, and so on.

Iterating through a Child View

Once a set of views is composed and a record is retrieved on the parent view, all child records associated with the parent record, and only those child records, will be available to iterate.

The following example retrieves an O/E Order and uses Fetch to iterate through each of the detail records of that particular order.

SQL Trace

The SQL trace shows something equally elegant: the Fetch operation on the Order Detail view simply retrieves every record for the Order.

It is worth noting that each subsequent call to Fetch does not result in a corresponding SQL query, so Sage300 must hold an in-memory cache of the resulting dataset in this instance.

Searching the Child View

Searching for records using the Browse operation requires only a simple query, and because the records being searched are only those related to the parent record, it is not necessary to specify order header fields such as ORDUNIQ. For example, to search for all items within an order containing the character '3':

Beware the Dangling Reference!

If we attempted to re-iterate the same set of detail records using the same Browse operation, the result would be only the last record of the previous iteration, and the SQL trace would show something odd too:

The query includes a filter to restrict to a single record where the LINENUM field is 256!

This is because the LINENUM field, at the time of the Fetch, is set to the last value of the previous iteration (in this example, 256), and Browse incorporates any values set on the index fields.

To reset the iteration, either set any key fields back to their default values or use GoTop (which ignores key field values).

Killing the View Composition

Up to this point the examples provided have used the default (0) index on the Child View. It is possible to set the Order property to use a different index on a composed child view.

It is important to note that the view composition will hold only when the first field(s) of the index match the parent view's primary key/default index. Using an index where the parent view's primary key is not the first field of the index being searched will result in the query returning records not associated with the parent record.

For example, searching using indexes 0, 1, & 6 of the O/E Order Detail view would return only records of the composed order, whereas indexes 2, 3, 4 & 5 will not.

The following example searches a composed O/E OrderDetail view using index 2.

Generates the following SQL:

Irrespective of whether Browse or GoTop is used, the resulting SQL generated by Sage300 is based on the order in which the fields of the index are listed. The 1st and 2nd parts of the where clause include the ORDUNIQ field, but since ORDUNIQ is the 3rd field within the index, the 3rd and 4th sub-clauses do not include it, thus returning non-associated records.

TIP: Note this!

GoTop Revisited

In some Child Views on certain indexes we’ve found GoTop acts like Fetch and includes values from key fields in the query.

If for example, we search the O/E Order Detail View using index 6, the GoTop operation on second iteration includes the ITEM field value in the query (if it’s not explicitly cleared prior to GoTop).

The following Trace shows the SQL generated on second iteration of the above scenario.

FilterSelect & FilterFetch

From our tests FilterSelect is shorthand for setting the Order property and the Browse operation, and FilterFetch is simply the corresponding fetch operation.

Conclusion

The trigger for this post was to educate ourselves on the different search methods available in Sage300 and their underlying functionality.

Our investigation has certainly educated us on some of the gotchas and caveats which, I know for certain, have tripped me up at one point or another. Internally we've also found a number of places where we can make existing searches more efficient, particularly those where we search for details within an existing transaction.

For example, prior to this post we were using a FilterSelect and FilterFetch to locate a detail record within a transaction.

We could re-engineer this to use GoTop and a manual record evaluation thereby avoiding the re-querying of the database on each line.

As always, we welcome your comments.

As we do more Magento integration work, we thought we’d put together a list of tips, tricks and the various issues we’ve faced.

The API

Use the API! Others may disagree, but working with the API is far easier than ‘integrating’ at the database level. There are many reasons to use the API:

  • The database schema is an awful Entity-Attribute-Value (EAV) design, resulting in rather awful queries and transaction statements.
  • Magento extensions hooking onto events within the API will still fire, for example, sending sales/transactional emails when an order is changed to a particular status (see Workflow below).
  • Better security, since you’re not opening the database to the internet.

There are a few points to be aware of:

  • Slowness: An issue common to all is the API's lack of performance. For normal day-to-day integration this is not an issue, but be careful of master data take-on for items, inventory and customers using the API. For large amounts of data it may be better to use the Import/Export functions.
  • If you're planning a new integration, use the REST API rather than XML-RPC.
  • Be careful of product differentiation between the Community and Enterprise editions. Some fields such as ‘Gift Comment’ are not available through the API in Community edition.
  • Shipping data is located in the order header of the Magento schema. Accounting/ERP systems typically record shipping as an item line, so you will need to transform the Magento order data accordingly (see the sketch below).
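A minimal sketch of that transformation, assuming the usual Magento order fields (items, shipping_amount, shipping_description); the field names and the 'SHIPPING' item code are illustrative assumptions, not a fixed contract:

    def shipping_as_line(magento_order):
        """Return the order lines with header-level shipping appended as an extra line."""
        lines = list(magento_order.get("items", []))
        shipping = float(magento_order.get("shipping_amount") or 0)
        if shipping:
            lines.append({
                "sku": "SHIPPING",    # notional carriage item in the ERP
                "name": magento_order.get("shipping_description", "Carriage"),
                "qty_ordered": 1,
                "price": shipping,
            })
        return lines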

Configurable Products

One of Magento’s key strengths is the ability to handle configurable products.

  • Because there are two ways of handling configurable products, there are different methods within the Magento API to access the configurable product information. These resources need to be queried in order to get pricing, description and quantity if you plan to download this data as part of an order import into your Accounting/ERP system.
  • You will need to address how configurable products will be handled/mapped to the items in the ERP/Accounting system. Consideration needs to be given to how an order line is to be imported, i.e. just the primary item or all items, and, for the same reasons, how inventory will be uploaded from the accounting system to Magento.

Master Data Considerations

  • Since most ERP/Accounting systems don't have the facility to store the extended data required for eCommerce, you will most often have to dual-manage item master setup, in both the ERP system and Magento.
  • Conversely, pricelist management is probably best performed within the ERP/Accounting system and uploaded to Magento.

Magento Extensions

One of the great things about Magento is the breadth and depth of third party add-ons or extensions.

  • Be watchful of extensions which modify the Magento data schema such as Aitoc’s multi-location and Innoext’s advanced pricing as you will need to integrate with the custom resources/interfaces to upload and download data from Magento.

Workflow & Order Status

Another useful piece of functionality within Magento is the ability to control order workflow through the use of order statuses.

  • One of the great extensions for Magento is the Amasty Order Status extension which allows you to send transactional emails against any order status.

Conclusion

I am sure there are others, so we'd be interested to hear other people's challenges or tricks.

Recently we received a support call from a client where processing a large number of Sales Ledger Invoices, i.e. more than 2000 in a single batch, would encounter an OutOfMemoryException error. Given the volume of transactions, we calculated that each transaction or invoice was leaking, or not releasing, around 700Kb of memory.

To replicate the issue, we set up the same import in our test environment and found we were encountering the same problem: memory usage would grow and grow until memory for the process was exhausted and the OutOfMemoryException error occurred.

I searched the Sage200 developer forum to see if anyone had encountered this issue; no-one had. I did, however, find in a completely unrelated post that someone was forcing a full garbage collection using GC.Collect to release memory.

Nowhere in our IMan application have we had to explicitly manage memory in this way, but as we had nothing else we thought we’d at least try it out.

To make this efficient, we decided to collect every 200 invoices.
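Purely to illustrate the pattern (our integration runs on .NET and calls GC.Collect; Python and the post_one callable below are used only for a compact, hypothetical sketch):

    import gc

    BATCH_SIZE = 200   # force a collection every 200 invoices

    def post_invoices(invoices, post_one):
        """Post invoices one at a time, forcing a garbage collection every BATCH_SIZE."""
        for count, invoice in enumerate(invoices, start=1):
            post_one(invoice)            # hypothetical per-invoice posting call
            if count % BATCH_SIZE == 0:
                gc.collect()             # the Python analogue of GC.Collect() in .NET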

We re-ran the import and, voilà, the memory issue disappeared. Or, more correctly, memory usage would increase until GC.Collect was called, at which point usage fell and available memory in Windows increased.

Whilst we had the problem solved, the solution had a foul stench about it, and we needed to find the cause, whether it was our code or a Sage problem.

The first thing we did was to use Performance Monitor to query the various .NET memory performance counters for our application.

In the version of software which had the memory issue, two things stood out:

  • ‘Gen2 Heap Size’ was massive; basically all memory was sitting in Gen2 waiting to be collected.
  • ‘% of Time in GC’ was increasing over time until it spiked to 100% and the application subsequently crashed.
[Image: Performance Monitor with the memory issue]

With the modified code using GC.Collect:

  • Gen2 heap was reducing on each call.
  • '% of Time in GC' remained relatively flat, except for the spikes when we called GC.Collect.
[Image: Performance Monitor when calling GC.Collect]

What this told us was that we weren't leaking memory, but that a lot of objects were accumulating in Gen2 waiting to be collected by the GC.

CLR Profiler

The next step was to identify which, or whose objects these were: our objects or Sage’s.

Whilst there are a lot of memory profilers available, the free CLR Profiler from Microsoft is actually pretty good. It provides a whole range of analyses but, importantly, it allows you to profile the .NET garbage-collected heap.

Using the Profiler, we decided to profile a batch of 200 invoices. After running it we could immediately see a massive Gen2 heap size for a comparatively small number of invoices.

[Image: CLR Profiler heap size]

To find out which objects were sitting in the Gen2 heap, we viewed the Heap Graph.

Changing the detail to a relatively coarse mode, we could immediately see that the majority of memory in use was indeed Sage objects waiting for their finalizer to be run.

[Image: CLR Profiler heap graph]

The Sage Response

We logged the issue with Sage after a little more testing to ensure it wasn't our application or its running environment (under the hood we're reasonably complex, so I wanted to ensure the issue wasn't related to that). Their response:

We’ve had to do the same thing for batch processes inside Sage 200. Seems to be that Finalisers get called only if the system is idle, and this never happens in the normal scheme of things, unless you force garbage collection.

So Sage have exactly the same issue for some processes internal to Sage200. We've subsequently asked where the issue lies and are awaiting a response.

General Issue

After further tests, we identified that this issue applies not only to S/L Invoices but to every Sage object inheriting from Sage.Common.DataAccess.BusinessObject, i.e. any object used for data access against the Sage database.

Conclusion

We believe this is a fundamental bug/problem within the core of the Sage200 library, which needs resolving by R&D.

However, if you are processing large batches of any data using the Sage200 API, we highly recommend you implement a garbage collection mechanism to prevent ugly memory related errors.

Our findings show that if you instantiate Sage objects in any sort of batch or continuously running process, you will suffer a 'memory leak': those objects won't be collected until you force a collection.

Introduction

Integrating any Warehouse Management System (WMS) or Third Party Logistics (3PL) with an ERP product such as Sage300, Sage200 or X3 is a complex task. Consideration needs to be given to a variety of issues such as:

  • Synchronisation of stock systems
  • Integration touch points
  • Volume & frequency

This blog post draws on our experience of integrating a number of WMS systems with ERP products, and while not necessarily offering any definitive answers, hopes to highlight the potential questions and issues affecting most WMS integrations.

Stock system

Unless a WMS has been purpose-built to use the ERP's stock system, most WMSs maintain their own stock system, each affected by transactions. The transactional nature of the interaction means the two systems will have natural and unnatural differences, brought about by latency in processing transactions and by processing errors.

The integrator’s job is to make the ERP and WMS stock systems as synchronised as possible, or at least highlight the possible points of difference.

[Image: ERP and WMS stock systems]

Logical Partitioning

Segmenting a warehouse (or ERP) into areas such as the receipt dock, quarantine and pick face makes perfect sense in both the WMS and the ERP. There are two key integration questions which need answering:

  • Mapping: how do the locations in the ERP and WMS map to each other? How does this mapping affect the stock position and availability of stock in the ERP?
  • Stock movements: when stock is moved in the WMS from one logical area to another, does the movement also need to be reflected on the ERP? Typically, ERPs treat a location as a boundary for sales and purchase order purposes, thus a movement from quarantine to the pick face will need to trigger a corresponding movement in the ERP.

Time sensitive

  • Transactions should be processed in the order in which they occurred on the WMS. If batching groups of transactions for processing, an alternative is to process all the transactions that add stock to the ERP system before those that decrement stock, which limits the scenario where a transaction removes stock required by another operation (see the sketch after this list).
  • Transactions such as P/O receipts can affect the available-to-sell value in your ERP, thus affecting potential sales.
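A minimal sketch of that ordering, assuming illustrative transaction type codes and a wms_timestamp field on each staged transaction:

    ADDS = {"PO_RECEIPT", "ADJUSTMENT_IN", "TRANSFER_IN"}       # assumed type codes
    REMOVES = {"SHIPMENT", "ADJUSTMENT_OUT", "TRANSFER_OUT"}

    def order_batch(transactions):
        """Order a batch so stock-increasing transactions are processed before
        stock-decreasing ones, each group in WMS timestamp order."""
        adds = [t for t in transactions if t["type"] in ADDS]
        removes = [t for t in transactions if t["type"] in REMOVES]
        by_time = lambda t: t["wms_timestamp"]
        return sorted(adds, key=by_time) + sorted(removes, key=by_time)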

Decide the integration points

The obvious candidates for integration are:

  • Item Master download to the WMS.
  • Sales Order download, followed by the upload of picking information creating a shipment/invoice in the ERP. Consideration should also be given to the way back orders will be handled.
  • Purchase Order download, followed by upload of receipt confirmation creating a purchase receipt in the ERP.
  • Adjustment upload from the WMS to the ERP. Adjustments can take several forms, from write-offs of damaged or end-of-life stock to adjustments made through cycle counts. Be prepared for some heavy debate around the 'should cycle count adjustments be immediately uploaded/treated as gospel' question. (They should, but some warehouse managers/business owners seem to have other ideas.)

Less obvious or more exotic transactions need some thought:

  • Transfers: if there are multiple warehouses, or the warehouse is partitioned (see logical partitioning above), a corresponding transfer in the ERP solution will need to be performed.
  • Customer and supplier returns: typically these are handled manually as the volume and workflow complexity usually means the cost for integration outweighs the benefit.
  • Kitting: kitting, or light manufacturing, can involve workflow, and the frequency may be too low to warrant integration.
  • Packaging operations: anything to do with packaging is usually met with a degree of complexity. The purchasing process involves a PO, receiving is done typically at the warehouse door, but thereafter it becomes complex: the receipted goods are consumables, so they cannot be booked into stock, and because of this stock control and management can become more difficult.
[Image: WMS transaction flow]

Locking and recall

Consideration needs to be given to the workflow of both the WMS and ERP products. If, for example, a sales order starts to be picked in the WMS, does the change in status also need to be reflected in the ERP, preventing the order from being modified?

Similar considerations need to be given to modifications made to, or deletions of sales and purchase orders in the ERP.

Inventory transactions

Inventory transactions by their very nature are more difficult to handle.

  • Stock is sensitive: invalid stock levels can trigger exceptions, and stock costs, particularly for adjustments, need to be considered.
  • Shipments and receipt transactions need to be mapped back to their original order documents. That is, when a line is picked, the same line on the sales order needs to be affected.

Volume

Due to the volume and different types of the transactions, the integration must work! Attempting to unpick issues when things go wrong is time consuming, costly and sometimes impossible.

Batch and serial numbers

Batch and serial number integration adds complexity, as it increases the number of reconciliation points between the WMS and the ERP solution: each batch or serial number in the WMS must also be present in the ERP. For example, a warehouse managing 200 batch-controlled items may be holding 1,000 discrete batches.

There are some variants on batch and serial number processing which can reduce complexity:

  • Record batches and serials only in the WMS.
  • Record batches and serials only on some operations, e.g. picking or goods-out.

Methods to integrate

As described in a previous post, we recommend staging tables for data exchange. When integrating with a 3PL, however, your method of exchange will be file exchange via FTP/SSH.

When using a staging table, you have the option of using a single table for all transactions or a table per transaction type. A single table does introduce some degree of abstraction (each field needs to be named FIELD1, FIELD2, etc., which is then logically mapped depending on the transaction type), but it has the major advantage that it can be reconciled more easily and transactions can easily be ordered by date/time.
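A hypothetical sketch of how the generic columns of a single staging table might be interpreted per transaction type; the column names, type codes and field meanings are assumptions for illustration only:

    # Maps each transaction type to the meaning of its generic staging columns.
    FIELD_MAP = {
        "SHIPMENT":   {"FIELD1": "order_number", "FIELD2": "line_number",
                       "FIELD3": "item_code",    "FIELD4": "quantity_picked"},
        "PO_RECEIPT": {"FIELD1": "po_number",    "FIELD2": "line_number",
                       "FIELD3": "item_code",    "FIELD4": "quantity_received"},
        "ADJUSTMENT": {"FIELD1": "item_code",    "FIELD2": "location",
                       "FIELD3": "quantity",     "FIELD4": "reason_code"},
    }

    def decode_row(row):
        """Translate one staging row (a dict keyed TRANTYPE, FIELD1..FIELDn) into named values."""
        mapping = FIELD_MAP[row["TRANTYPE"]]
        return {name: row[column] for column, name in mapping.items()}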

Reconciliation

The ability to reconcile the ERP and WMS stock systems is paramount. We would recommend creating a report, run daily at a time when no-one is using either the WMS or the ERP, comparing the WMS and ERP stock levels.
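The comparison itself can be as simple as the following sketch, assuming each side can be exported as an item-to-quantity snapshot:

    def reconcile(erp_stock, wms_stock):
        """Compare two {item_code: quantity} snapshots and report the differences."""
        differences = []
        for item in sorted(set(erp_stock) | set(wms_stock)):
            erp_qty = erp_stock.get(item, 0)
            wms_qty = wms_stock.get(item, 0)
            if erp_qty != wms_qty:
                differences.append((item, erp_qty, wms_qty, wms_qty - erp_qty))
        return differences   # feed this into the daily reconciliation report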

Test, test, test

It goes without saying that all integrations should be tested, but WMS integrations more so. The number of transactions, their time-sensitive nature, and their variety make this a difficult beast. Try to cover the following when testing:

  • Performance: try to mimic the load and volume of transactions performed throughout the day.
  • Concurrency and multi-user environment: test for scenarios where transactions may be locked by the ERP or the WMS.
  • Check your audit points and controls: if you need to unpick an error, ensure you’re capturing all the necessary data from the outset.

Conclusion

Integrating any WMS is not simple. Consideration needs to be given to a wide variety of issues and integration points where cost is often a factor. With careful planning however, it can be very rewarding: seeing all the transactions tick through is quite nice!

Introduction

Last week there was a post on the Sage300/Accpac VBA forum asking how to automate the Reversals function in Bank Services. In short the responses were: macro recording didn’t work; the view was clumsy and difficult; and the best method to automate would be to mimic the entry via SendKeys.

Given that macro recording is not supported by the bank reversals function, it is necessary to use RVSpy to understand the view calls. This post describes the various tips and tricks we use when we have to work with RVSpy, using the reversals example from the forum post.

A copy of the final source code and the RVSpy log can be found at the bottom of the post.

Filter the view calls

By default RVSpy captures all calls, which leads to an extremely verbose and sometimes incomprehensible log file. Filtering the view calls to remove all the metadata and get requests will significantly trim the log file down to the calls which matter most.

  • Filter view calls: Fields, Field, Attribs, Type, Get, Name, Presents, FieldName & FieldExists.
    [Image: RVSpy filter calls dialog]
  • Turn off 'Show Nesting': typically you are only interested in the calls the UI makes, not the calls views make to other views.
    [Image: RVSpy disable nesting option]

In our example the size of the generated log went from 1.8MB unfiltered to 427KB filtered.

Recording the entry

Finder Fields

Try to enter the values into each field directly. Using the finder will significantly add to the log and can make it difficult to understand (see below for exceptions).

Use comments

Before you do something major such as posting, or adding an entry, insert a comment into the log. It will help you later on to identify the event in the log file and will also provide you with a cutoff where major processing is performed by the view.

[Image: RVSpy comment dialog]

The oogily-boogily nature of the Bank Reversals View

If you look at the Bank Reversals view, you can easily understand the trepidation as it appears to have several purposes:

  • As a history, where posted reversals can be queried and retrieved.
  • The screen supports the ability to select multiple documents to reverse, so there are a range of fields to support the selection process.
  • Posting of reversals where there are several fields with conflicting names such as BANK/PBANK and SRCEAPP/PSRCEAPP.

The other major issue is around the view protocol: ‘Flat, Ordered Revisions’.

  • Flat Protocol: Flat is simply a list or table of records with no hierarchical element; a lot of the setup views such as Account Sets are flat.
  • Ordered Revisions: Ordered Revisions means that an internal list of changes is maintained in memory until a Process or Post command is given. In our example an Insert is performed prior to the Process. If processing multiple reversals, each reversal would be Inserted before the Process performs the reversals as a batch operation.

Understanding Your Log File

After recording is complete, you now need to understand the log file.

In our example the two major issues were the fields we needed to set, and the calls required to post the reversal.

To understand the fields which need to be set, search for ‘Put’. In our example the first put is to PBANK to set the bank code. One thing to note is that RVSpy uses field numbers as opposed to names, so you will need to translate these into names using the Accpac Object Model.

[Image: RVSpy log showing the Put to PBANK]

Continue searching and you will find Puts to PAYORID, TRANDATE, and REASON fields.

Next you will find Puts to two key fields SERIAL and LINENO.

[Image: RVSpy log showing the Puts to SERIAL and LINENO]

These two fields were a little confusing at first, but we knew they had to relate to the transaction being reversed. In both recordings we made, we had actually used the finder to select the transaction we were reversing. Prior to the two Put statements we could see the finder searching BKTRANH & BKTRAND, so the two values corresponded to the keys in these tables.

The final step was to search for the comment we made before we posted the reversal.

[Image: RVSpy log showing the Insert and Process calls]

Three actions were performed:

  • Inserting the record into the view.
  • Switching the view mode to ‘Process Reversal Selections’.
  • Issuing Process to process the reversal.

Finally

It may be necessary to record something more than once. You may enter something incorrectly, or you may need to repeat an action to understand each call; but used effectively, RVSpy is the best (or only) tool for understanding those Sage300 views/screens which do not support macro recording.

Download Bank Services Reversals source code

Contact

Realisable Software Ltd provides code-free, cost-effective applications integration solutions for SMEs. Our core IMan product is designed to integrate almost any application with a number of Sage solutions and online payment processors.

Looking to purchase IMan, please see our resellers here.

Realisable Software
Ph: +44 (0) 208 123 1017

Copyright © Realisable. All rights reserved.
Realisable is a registered trademark
