Sage integration blog | Blog | Realisable

3 Tips to a successful integration with Sage

Blogroll, Data Integration on May 9th, 2016

by David Weaver, Marketing Manager, Oakley ERP

No doubt you are looking over the Realisable IMan website because you have identified a pain point between your business software applications. The reasons are numerous, but the motive likely comes back to one familiar problem… a lack of integration between your website, industry solution and Sage accounting software.

You are not alone; in fact, at Oakley ERP we have successfully paired IMan with Sage for over 60% of our customer base.

Whether connecting a bespoke software application or a more recognised solution such as Concur Expenses, we have successfully merged our development skills and experience of Sage with our strong partnership with Realisable.

Take advantage of our experience and take a look at our 3 tips for a smooth integration with Sage:

1. A Clear Understanding Of What You Want To Achieve

All too often the goal becomes a blanket statement built on the buzzwords of automation and integration. It is more important to understand what you would actually gain from undertaking such a project. It is all very well syncing data from one application to another, but what does the data sync actually achieve? For example, you might be looking to free up staff who would otherwise have to key in the data manually, which is not just time consuming but also prone to human error.

With a few calculations you will soon be able to work out the potential ROI of an integration project. And while the numbers speak for themselves, don't discount the emotional benefit of giving staff better workflows.
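For illustration, a back-of-the-envelope sketch of that calculation (all figures here are hypothetical, not benchmarks):

```vb
' Hypothetical ROI sketch - every number below is illustrative.
Dim ordersPerDay As Double
Dim minutesPerOrder As Double
Dim hourlyCost As Double
Dim annualCost As Double

ordersPerDay = 50        ' orders keyed manually each day
minutesPerOrder = 3      ' time to re-key one order
hourlyCost = 15          ' loaded cost per staff hour

' Annual cost of manual keying, assuming 250 working days:
annualCost = ordersPerDay * minutesPerOrder / 60 * hourlyCost * 250
' = 50 * 3 / 60 * 15 * 250 = 9,375 per year, before the cost of
' correcting keying errors is even considered
```

Set that annual figure against the cost of the integration project and the payback period usually becomes obvious very quickly.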

2. Gain A Closer Understanding Of The Value To Assign To The Project

An unfamiliar, custom integration project can take a heavy dip into the business budget, but IMan changes this perception: the core integration tool for Sage is already built, and only the development specific to your application needs to be added around it. While there will always be bigger integration projects, this generally makes projects take far less time than you would think. The benefit of working with an experienced IMan partner is that, having delivered a number of integration projects, their estimates are likely to be close to the final figure, as learning curves and similar projects can be called upon for reference.

At Oakley, through our experience and expertise, we are able to provide estimates for IMan integration projects with around 98% coming in as quoted.

3. Ensure You Appoint A Project Leader

If you are considering an integration project with your Sage solution, it is vital to appoint a go-to contact on your side of the project. They need to champion the project and help make the integration work as smoothly as possible. There will be a lot of communication between both parties, and having an assigned contact can make a real difference in completing the project quickly, easily and on budget.


About the Author

David Weaver works as Marketing Manager at Oakley Global, a leading Sage Business Partner and IMan Business Partner. Oakley have delivered many successful Sage projects and routinely include IMan development in the majority of proposals. Discover more about our leading Sage solutions, Sage 200 and Sage 300.

by Fiona Skinner

The Path to Killer Help Documentation: Our Journey

Blogroll on April 6th, 2016

I began my Realisable adventure 20 months ago when James keenly saw the chance to utilise my scientific background and “fresh eyes” to analyse the help documentation for IMan. Flaws in the original documentation needed attention, as they led to missed sales, frustrated users and all too many avoidable support tickets. The logic for dragging me into it was that someone with little tech experience would be similar to one of our clients trying to access the help and use the software. If it makes sense to me, there’s a better chance that it’ll make sense to our users; and if our clients have an easier time learning to use our software, our developers can spend less time answering support queries and more time creating better data solutions, answering more complex problems and ultimately improving our service.

Along with analysing the content of the help, there was also the question of format. James wanted to somehow convert the documents from the impenetrable PDF to an accessible, online, searchable resource. Our solution was to use a market-leading one-source publishing platform called MadCap Flare. It was with this product that I spent months of my life, learning HTML and CSS, and eventually converting our key user guides into the form that they are today.

On my first review of the user guide, my most common annotation was “Why?”. In response to almost every one of these, James told me to go and try it; whatever ‘it’ was. With an internal sigh, I went off to struggle with setting up email report groups, creating XML files, translating fields, understanding what an ‘expando’ field was, and to tackle my nemesis: hierarchical data keys. I got completely lost in the software, primarily down to confusion while interpreting techie explanations & instructions (or the lack thereof!). It was hard work, but I now have a thorough understanding of the software and have created documentation that makes sense to me, which will hopefully help others avoid the struggle I went through.

“We hope it will ease the learning curve, leading to fewer frustrated conversations and consequently a better experience.” – James Hutchison.

Because I was simultaneously learning how to use the software, we were also finding bugs that had snuck through product testing! From the beginning, James saw the opportunity for a front-to-back software development approach, and his method clearly pays off, as my review process also analyses the software itself. I’ve thoroughly reviewed the basic functionality of the software; as a result, previously missed bugs have been discovered, and we keep finding more ways in which the whole package can be improved. Some bugs were easily fixed, while other adjustments are part of the next version release of IMan and will result in an even stronger product!

The online help has been live since July 2015, and I continue to refine it.

If you have any experience you’d like to share with us about help usability and front-to-back software development, we’d love to hear about it!

Find us on Twitter or get in touch on LinkedIn.

Tackling Magento Integration

Blogroll on March 7th, 2016

Over the years we’ve integrated a large number of Magento sites with different Sage products, hitting numerous issues along the way. Here we discuss the most common of these and how we’ve overcome them, either in the Magento connector (developed by our partner, EC Internet) or through tweaking settings within Magento.

Slow API

Anyone who has had to get data into or out of Magento via its API will know about the dreadful performance. For transactional data (Orders, Shipments, etc.), where the quantities are smaller, this is usually OK; with suitable server infrastructure, performance can usually be tuned to an acceptable point.

However, for master data such as product or customer records, where there are larger volumes and where there’s a need for frequent updates, the API is not only too slow, but can also result in database locking issues, update failures and the like.

To remedy this issue it is necessary to bulk update the data, i.e. to execute the update server side, within Magento itself. Magento does have an import/export function, but it is not conducive to a 24/7 eCommerce operation since it requires manual intervention and a specific CSV file format.

Our Magento connector addresses these by:

  1. incorporating bulk/batch upload and download functionality for customer & product data. For example, the bulk update functionality has reduced stock updates of around 2,000 products from 3 hours to under 5 minutes.
  2. building the bulk functionality into the standard IMan integration flow. Not only does this result in a configurable, easy-to-use mapping, but any errors are correctly reported on the standard IMan audit report, enabling users to take remedial action easily and quickly.

Performance and memory usage

The poor performance of the Magento API is well known, but even the smallest queries resulting from API interactions can easily run afoul of memory limits. Settings to watch out for in your PHP setup are memory_limit and max_execution_time, along with MySQL’s buffer size, innodb_buffer_pool_size. If you’re running into issues where API calls fail, altering these values may help.
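As a rough illustration, the relevant settings live in php.ini and the MySQL configuration (the values below are illustrative starting points only; the right numbers depend entirely on your server’s RAM and load):

```ini
; php.ini - illustrative values, tune to your environment
memory_limit = 512M
max_execution_time = 300

; my.cnf, [mysqld] section - illustrative value.
; On a dedicated database server innodb_buffer_pool_size is commonly
; set to a large proportion of available RAM.
innodb_buffer_pool_size = 2G
```

Remember to restart PHP/Apache and MySQL after changing these for the new values to take effect.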

We will be following up next month with a more comprehensive guide and tuning reference, so stay tuned.

Field Lengths

A common issue with any eCommerce/ERP integration is mismatched field lengths; address fields are particularly sensitive. Any Magento implementation should review these, and how the fields will map to Sage, otherwise there’s a chance of mis-ships.

Fortunately, our connector has an inbuilt remedy to match address field lengths to the integrated Sage ERP solution.

Magento Extensions and their challenges to integration

One of the key benefits of Magento is the breadth and depth of the extensions (plugins) which can enhance Magento’s functionality in every perceivable way.

These extensions however can present a challenge to integration as a number of them will add and enrich data on any Magento object and this enriched data often needs to make its way into the ERP.

As Magento (pre version 2) lacks a descriptor or schema for its object model, we can’t know ahead of time what fields are available. A means of dynamically interrogating the Magento objects is therefore needed, so that the necessary data can be passed through to your Sage accounting or ERP solution.

Again, our Magento connector does this transparently where product, order & customer attributes, and additional payment details are all automatically recognised, and the data can be mapped/integrated, based on requirements.

Our solution isn’t perfect and there are improvements to be made. The existing Magento API doesn’t expose every piece of data; some of it is squirreled away with no API access. As part of our next release we’ll be implementing a feature allowing lookups & translations directly against the Magento database (via an API call), meaning every field can be reached.

Where do we go from here?

If you have any sticky issues or feature requests or just want to discuss the Magento connector in general please visit our forums:

https://www.realisable.co.uk/support/forum

Advanced Sage300 Development – Unpicking the Payroll Tax Employee Views – Resource IDs

Sage300 on February 23rd, 2016
We have just completed some work to add the ability to create/update Employee records (both Canadian & US) in Sage300 (to be released formally shortly).

One of the gotchas we ran across is the updating of the Employee Tax screen.

(Screenshot: Employee Additional Tax screen)

Recording a macro didn’t really help as the mechanism the UI uses to identify each row is the PARMSEQ field, which is an incrementing value representing the row number of the table.

For an automated import this is quite brittle as an additional entry could potentially reorder the rows causing the incorrect row to be updated.

When viewing the table we found an integer value (PARMNBR) was being stored, which we could only assume was some sort of reference to the record.

(Screenshot: the CP0062 table)

The PARMNBR would be fine to use as the key for updates, but you would need to know its value, and this is a little concealed; ideally we want to use the description as the key to update these values.

Investigation 1 (Fail)

Hunting around the tables… no table could be found with this info.

Investigation 2 (RVSPY)

We opened our friend, RVSPY with nesting turned on and found the screen was calling CT0002.

(Screenshot: RVSPY trace showing CT0002)

We then tried querying the table, per the Object Model, but the table doesn’t exist.

So we tried querying the view. Success: it returned the values, but unfortunately no description field.

(Screenshot: querying the CT0002 view)

This makes sense, since all descriptions are stored in resource files; the PARMNBR was actually the Resource ID.

To get the descriptions we needed to use the RscGetString method to retrieve them (note that we’re querying the CT module and not the CP application).
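A rough sketch of the lookup (the RscGetString parameter list shown here is an assumption; consult the Sage300 SDK reference for the exact signature):

```vb
' Sketch: resolve a PARMNBR (actually a resource ID) to its description.
' The call shape below is assumed - check the Sage300 SDK documentation.
Dim desc As String
' Note we query the CT module's resources, not the CP application.
desc = RscGetString("CT", parmNbr)
```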

I hope this helps out others out there!


Uncovering Read, Browse & FilterSelect Statements in Sage300

Uncategorised on September 30th, 2013
In the last couple of weeks we’ve been doing quite a bit of work to our Sage300 connector to add update functionality to existing transactions.

When working in a non-UI, automated process, one of the key issues when updating a transaction is the ability to locate a particular line (or child record) within a transaction; so that it can be updated, deleted or a new one inserted should one not exist.

We’ve had this sort of functionality within our software for quite some time (to locate particular lines when working with O/E Shipments, Invoices, P/O Receipts, etc.), but we needed to revisit our line search method to implement the new update functionality. Given the different methods of searching for transactions and records in Sage300, we took the opportunity to properly evaluate whether we were using the best line search strategy.

This blog attempts to describe our investigations and the various search methods available within Sage300 for finding transactions, and to explore what each of them actually does under the hood.

Order Property & Indexes

One of the dysfunctions of Sage300 is that index use is explicit. Whenever Sage300 executes a query, rather than letting the database choose which index to use, the generated query expressly states which index the database must use. I can only blame Sage300’s early days of running on Btrieve for this, but the result is that you must know how the chosen index will affect the underlying query.

To alter the index is simply a matter of altering the Order property of the view.

One of the key discoveries of our research is that the Order property of a view can drastically affect the functionality of the search operation.

Cursor Accessed Indexes

It is worth noting that Sage300 employs two strategies for retrieving records: simple and cursor accessed.

Simple indexes simply specify a where clause within a single SQL statement.

Cursor accessed indexes use a stored procedure (with a server-side cursor for positioning) to set up the query, with subsequent calls to the stored procedure moving the cursor and returning the next record. The code below illustrates the initial setup of a cursor-based index.

Please note the explicit field list has been removed and replaced with * for brevity.

Sage300 then calls the stored procedure to return the next record from the query:

Whilst our testing indicates the index type is transparent to the calling operation, cursor accessed indexes are significantly slower than simple indexes because of the extra SQL call to retrieve each record.

Our investigation was a little inconclusive, and we’d be happy to hear from others on this point; but the use of cursor accessed indexes in Sage300 appears to be dynamic:

  • A stored procedure/cursor would be used when an index was specified but the search did not include any index fields. This only occurred on non-composed views or the parent view of a composed view.
  • Certain indexes always use cursors, composed or otherwise.

Unfortunately, the API doesn’t provide any means of querying whether an index is cursor accessed or simple.

Read

The first access method is the simplest: the Read function is designed to return the single record pointed to by the values of the fields of the corresponding Order (index). The following code returns a single O/E Order, using the Order Number:
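A minimal VBA sketch of the pattern (the OE0520 view name and the index number are assumptions on our part; recording a macro against your own system will confirm the exact names):

```vb
' Sketch: read a single O/E Order by Order Number.
Dim dbLink As AccpacCOMAPI.AccpacDBLink
Dim orderHeader As AccpacCOMAPI.AccpacView

Set dbLink = OpenDBLink(DBLINK_COMPANY, DBLINK_FLG_READWRITE)
dbLink.OpenView "OE0520", orderHeader   ' assumed: O/E Order Header view

orderHeader.Order = 1                   ' assumed: the index keyed on ORDNUMBER
orderHeader.Fields("ORDNUMBER").PutWithoutVerification "ORD000000000042"
If orderHeader.Read Then
    Debug.Print orderHeader.Fields("CUSTOMER").Value
End If
```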

SQL Trace

If we look at the SQL trace for the Read method, not only does it select just a single record with the TOP 1 directive; it also provides a hint to use the index corresponding to the selected Order.

DBSpy

DBSpy issues a similarly short statement involving a single Get operation.

In terms of database access and speed, Read is certainly the most efficient since it only ever retrieves a single record based on an index.

Using Read to iterate multiple records

Read will only ever return the first matching record based on the values set for the fields of the index. It is therefore not really a viable way to iterate multiple records, as it would be necessary to set the key field values, perform a Read, and repeat.

Browse & Fetch

The Browse and Fetch operations combine to allow multiple records to be retrieved and iterated, and in contrast to the Read operation they allow you to search for records using non-indexed fields.

The Browse operation also allows you to use the GoTop, GoNext & GoPrevious functions (see following section).

Specifying the Index of a Browse Operation

Like the Read function, the index used changes the way Browse and Fetch retrieve records. In the example below we retrieve the Customer record using the Customer’s Name, changing the index to 2 and Putting the single field of that index.
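A VBA sketch of that pattern (the AR0400 view name, the index number and the customer name are assumptions; record a macro to confirm the details for your system):

```vb
' Sketch: find an A/R Customer by name using a non-default index.
Dim dbLink As AccpacCOMAPI.AccpacDBLink
Dim customers As AccpacCOMAPI.AccpacView

Set dbLink = OpenDBLink(DBLINK_COMPANY, DBLINK_FLG_READONLY)
dbLink.OpenView "AR0400", customers   ' assumed: A/R Customers view

customers.Order = 2                   ' assumed: index whose field is NAMECUST
customers.Fields("NAMECUST").PutWithoutVerification "Mr Ronald Black"
customers.Browse "", True             ' no criteria; the index field supplies the filter
Do While customers.Fetch
    Debug.Print customers.Fields("IDCUST").Value
Loop
```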

SQL Trace

One would expect from the above that the Browse operation would override any field set previously; however, it does not. Instead, because the field is part (or all) of the index, the value of the field is incorporated into the query.

Please note this query has been cleaned of the stored procedure setup code to illustrate the scenario.

Browse & Non Indexed Fields

Using Browse on non-indexed fields will result in Sage300 omitting them from the resulting query, instead using its own filtering mechanism executed client side in the Sage300 view code.

For example, a search of A/R Customer records using only the customer name will result in every single customer record being returned to Sage300, where each record is evaluated against the Browse criteria before the resultant dataset is returned to the calling client.

TIP: We recommend searching using indexes wherever possible, or using the CS0120 view to execute a query directly on the database (where the database can perform more efficient filtering).

Fetch is not GoTop

As mentioned previously, there are several functions (GoTop, GoNext & GoPrevious) which work in conjunction with Browse. The scant inline help provided in VBA & VB6 seems to indicate that Fetch and GoTop act equivalently when fetching the first record after a Browse.

They do not!

Whereas Fetch always incorporates index field values, GoTop behaves erratically: it includes the values from the key fields most of the time, and omits them at other, obscure times (see below). Using the same Customer query as previously, using GoTop instead of Fetch results in the following SQL:

TIP: We don’t have one… why does this happen?

Composed views and searching Child Views

Thus far the focus has been on searching either non-composed views or the parent view of a composed view. There are some gotchas when querying composed views!

A view composition is Sage300’s implementation of record relationships and associations, similar to an Object Relational Mapping system such as Hibernate. A view within a composition defines its relationship to one or more other views, which in turn define how they relate to further views, and so on.

Iterating through a Child View

Once a set of views is composed and a record is retrieved on the parent view, all child records associated with the parent record, and only those child records, will be available to iterate.

The following example retrieves an O/E Order, with Fetch iterating through each of the detail records of that particular order:
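A VBA sketch of the composition and iteration (the OE0520/OE0500 view names and the minimal composition list are assumptions; a recorded macro will show the full composition, which typically involves several more views):

```vb
' Sketch: compose the O/E Order header & detail views, then iterate
' the detail lines of one order.
Dim dbLink As AccpacCOMAPI.AccpacDBLink
Dim header As AccpacCOMAPI.AccpacView
Dim detail As AccpacCOMAPI.AccpacView

Set dbLink = OpenDBLink(DBLINK_COMPANY, DBLINK_FLG_READWRITE)
dbLink.OpenView "OE0520", header   ' assumed: O/E Order Header view
dbLink.OpenView "OE0500", detail   ' assumed: O/E Order Detail view

header.Compose Array(detail)       ' simplified; real compositions list more views
detail.Compose Array(header)

header.Fields("ORDNUMBER").PutWithoutVerification "ORD000000000042"
If header.Read Then
    Do While detail.Fetch          ' only this order's lines are returned
        Debug.Print detail.Fields("ITEM").Value
    Loop
End If
```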

SQL Trace

The SQL trace shows something equally elegant: the Fetch operation on the Order Detail view simply retrieves every record for the Order.

It is worth noting that each subsequent call to Fetch does not result in a corresponding SQL query, so Sage300 must hold an in-memory cache of the resulting dataset in this instance.

Searching the Child View

Searching for records using the Browse operation requires only a simple query, and because the records being searched are only those related to the parent record, it is not necessary to specify order header fields such as ORDUNIQ. For example, we can search for all items within an order containing the character ‘3’.
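Assuming a composed header/detail pair with the parent order record already read (as described above), a sketch of that search:

```vb
' Sketch: 'detail' is a composed O/E Order Detail view whose parent
' order has already been read. Note there is no ORDUNIQ in the filter.
detail.Browse "ITEM LIKE ""%3%""", True
Do While detail.Fetch
    Debug.Print detail.Fields("ITEM").Value
Loop
```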

Beware the Dangling Reference!

If we attempted to re-iterate the same set of detail records using the same Browse operation, the result would be only the last record of the previous iteration, and the SQL trace would show something odd too:

The query includes a filter to restrict to a single record where the LINENUM field is 256!

This is because the LINENUM field at the time of the Fetch is set to the last value of the iteration (in this example, 256), and Browse incorporates any values set on the index fields.

To reset the iteration, either set any key fields back to their default values or use GoTop (which ignores key field values).

Killing the View Composition

Up to this point the examples provided have used the default (0) index on the Child View. It is possible to set the Order property to use a different index on a composed child view.

It is important to note that the view composition will hold only when the index’s first field(s) match the parent view’s primary key/default index. Using an index where the parent view’s primary key is not the first field of the index being searched will result in the query returning records not associated with the parent record.

For example, searching using indexes 0, 1 & 6 of the O/E Order Detail view would return only records of the composed order, whereas indexes 2, 3, 4 & 5 would not.

The following example searches a composed O/E OrderDetail view using index 2.
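A sketch of that scenario (here detail is a composed O/E Order Detail view with the parent order already read, as in the preceding discussion; the index number follows the text):

```vb
' Sketch: switch the composed detail view to index 2 and browse.
' Because ORDUNIQ is not the first field of this index, records from
' OTHER orders can appear in the results!
detail.Order = 2
detail.Browse "ITEM LIKE ""%3%""", True
Do While detail.Fetch
    Debug.Print detail.Fields("ITEM").Value
Loop
```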

This generates the following SQL:

Irrespective of whether Browse or GoTop is used, the resulting SQL generated by Sage300 is based on the order in which the fields of the index are listed. The 1st and 2nd parts of the where clause include the ORDUNIQ field; but since ORDUNIQ is the 3rd field within the index, the 3rd & 4th sub-clauses do not include it, thus returning non-associated records.

TIP: Note this!

GoTop Revisited

In some child views, on certain indexes, we’ve found GoTop acts like Fetch and includes key field values in the query.

If, for example, we search the O/E Order Detail view using index 6, the GoTop operation on the second iteration includes the ITEM field value in the query (if it’s not explicitly cleared prior to GoTop).

The following trace shows the SQL generated on the second iteration of the above scenario.

FilterSelect & FilterFetch

From our tests, FilterSelect is shorthand for setting the Order property and performing a Browse operation, and FilterFetch is simply the corresponding Fetch operation.
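In other words, given an open Sage300 view object, the two forms below appear to behave much the same (the FilterSelect parameter list shown is an assumption; check the API reference for the exact signature):

```vb
' Form 1: FilterSelect/FilterFetch (parameter order assumed).
view.FilterSelect "ITEM LIKE ""%3%""", True, 2, 0
Do While view.FilterFetch
    ' process the record...
Loop

' Form 2: the equivalent Order + Browse/Fetch.
view.Order = 2
view.Browse "ITEM LIKE ""%3%""", True
Do While view.Fetch
    ' process the record...
Loop
```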

Conclusion

The trigger for this post was to educate ourselves on the different search methods available in Sage300 and their underlying functionality.

Our investigation has certainly educated us on some of the gotchas and caveats, which I know for certain, have tripped me up at one point or another. Internally we’ve also found a number of places where we can make existing searches more efficient, particularly those where we search for details within an existing transaction.

For example, prior to this post we were using FilterSelect & FilterFetch to locate a detail record within a transaction.

We could re-engineer this to use GoTop and manual record evaluation, thereby avoiding re-querying the database for each line.

As always, we welcome your comments.


Tips for successfully integrating Magento

Uncategorised on September 19th, 2013
As we do more Magento integration work, we thought we’d put together a list of tips, tricks and the various issues we’ve faced.

The API

Use the API! Others may disagree, but working with the API is far easier than ‘integrating’ at the database level. There are three reasons to use the API:

  • The database schema is an awful Entity-Attribute-Value (EAV) design, resulting in convoluted queries and transaction statements.
  • Magento extensions hooking onto events within the API will still fire; for example, sending sales/transactional emails when an order is changed to a particular status (see Workflow below).
  • Better security, since you’re not opening the database to the internet.

There are a few points to be aware of:

  • Slowness: the API’s poor performance affects everything. For normal day-to-day integration this is not an issue, but be careful of master data take-on for items, inventory and customers using the API. For large amounts of data it may be better to use the Import/Export functions.
  • If you’re planning a new integration, you should use the REST API over XML-RPC.
  • Be careful of product differentiation between the Community and Enterprise editions. Some fields, such as ‘Gift Comment’, are not available through the API in the Community edition.
  • Shipping data is located in the Order Header of the Magento schema, whereas Accounting/ERP systems typically record shipping as an item line, so you will need to transform the Magento order data accordingly.

Configurable Products

One of Magento’s key strengths is the ability to handle configurable products.

  • Because there are two ways of handling configurable products, there are different methods within the Magento API to access configurable product information. These resources need to be queried to get pricing, description and quantity if you plan to download this data as part of an order import into your Accounting/ERP system.

(Screenshot: Configurable Products example #1)

  • You will need to decide how configurable products will be handled/mapped to the items in the ERP/Accounting system. Consideration needs to be given to how an order line is imported, i.e. just the primary item or all items; and, for the same reasons, how inventory will be uploaded from the accounting system to Magento.

(Screenshot: Configurable Products example #2)

Master Data Considerations

  • Since most ERP/Accounting systems don’t have the facility to store the extended data required for eCommerce, you will most often have to dual-manage item master setup in both the ERP system and Magento.
  • Conversely, pricelist management is probably best performed within the ERP/Accounting system and uploaded to Magento.

Magento Extensions

One of the great things about Magento is the breadth and depth of third party add-ons or extensions.

  • Be watchful of extensions which modify the Magento data schema, such as Aitoc’s multi-location and Innoext’s advanced pricing, as you will need to integrate with the custom resources/interfaces to upload and download data from Magento.

Workflow & Order Status

Another useful piece of functionality within Magento is the ability to control order workflow through the use of Order Statuses.

  • One of the great extensions for Magento is the Amasty Order Status extension which allows you to send transactional emails against any order status.

Conclusion

I am sure there are others, so we’d be interested to hear about other people’s challenges and tricks.


Sage200, Memory Leaks and the .net Garbage Collector

Uncategorised on September 12th, 2013
Recently we received a support call from a client where processing a large number of Sales Ledger Invoices, i.e. > 2,000 in a single batch, would encounter an OutOfMemoryException error. Given the volume of transactions, we calculated each transaction or invoice was leaking, or not releasing, around 700KB of memory.

To replicate the issue, we set up the same import on our test environment and encountered the same problem: memory usage would grow and grow until the process’s memory was exhausted and the OutOfMemoryException error occurred.

I searched the Sage200 developer forum to see if anyone had encountered this issue; no-one had. I did, however, find a completely unrelated post where someone was forcing a full garbage collection using GC.Collect to release memory.

Nowhere in our IMan application have we had to explicitly manage memory in this way, but as we had nothing else we thought we’d at least try it out.

To make this efficient, we decided to collect every 200 invoices.
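A sketch of that batching pattern in VB.NET (the posting routine and variable names are illustrative, not the Sage200 API):

```vb
' Sketch: force a full collection every 200 invoices so objects
' awaiting finalization don't accumulate in Gen2.
Dim processed As Integer = 0
For Each invoice In invoiceBatch
    PostInvoice(invoice)                 ' hypothetical posting routine
    processed += 1
    If processed Mod 200 = 0 Then
        GC.Collect()                     ' collect & queue finalizers
        GC.WaitForPendingFinalizers()    ' let the finalizers actually run
        GC.Collect()                     ' reclaim the finalized objects
    End If
Next
```

The GC.WaitForPendingFinalizers call matters here, since (as we discovered below) it is the finalizers that never get a chance to run.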

We re-ran the import and, voilà, the memory issue disappeared; or, more correctly, memory usage would increase until GC.Collect was called, at which point usage fell and available memory in Windows increased.

Whilst we had the problem solved, the solution had a foul stench about it, and we needed to find the cause: whether it was our code or a Sage problem.

The first thing we did was to use Performance Monitor to query the various .net Memory performance counters for our application.

In the version of software which had the memory issue, two things stood out:

  • ‘Gen2 Heap Size’ was massive; basically all memory was sitting in Gen2 waiting to be collected.
  • ‘% of Time in GC’ was increasing over time until it spiked to 100% and the application subsequently crashed.

Performance Monitor With Memory Issue

With the modified code using GC.Collect:

  • Gen2 heap was reducing on each call.
  • ‘% of Time in GC’ remained relatively flat, except for the spikes when we called GC.Collect.

Performance Monitor Calling GC Collect

What this was telling us was that we weren’t leaking memory; rather, a lot of objects were accumulating in Gen2 waiting to be collected by the GC.

CLR Profiler

The next step was to identify whose objects these were: ours or Sage’s.

Whilst there are a lot of memory profilers available, Microsoft’s free CLR Profiler is actually pretty good. It provides a whole range of analyses, but importantly it allows you to profile the .net garbage-collected heap.

Using the Profiler, we decided to profile a batch of 200 invoices. After running it, we could immediately see a massive Gen2 heap size for a comparatively small number of invoices.

CLR Profiler Heap Size

To find out which objects were sitting in the Gen2 heap, we viewed the Heap Graph.

Changing the detail to a relatively coarse mode, we could immediately see that the majority of memory in use was indeed Sage objects waiting for their Finalizer to be run.

CLR Profiler Heap Graph

The Sage Response

We logged the issue with Sage after a little more testing to ensure it wasn’t our application or its running environment (under the hood we’re reasonably complex, so I wanted to rule that out). Their response:

“We’ve had to do the same thing for batch processes inside Sage 200.

Seems to be that Finalizers get called only if the system is idle, and this never happens in the normal scheme of things, unless you force garbage collection.”

So Sage have exactly the same issue for some processes internal to Sage200. We’ve subsequently asked where the issue lies, and are awaiting a response.

General Issue

After further tests, we identified that this issue applies not only to S/L Invoices but to every Sage object inheriting from Sage.Common.DataAccess.BusinessObject, i.e. any object used for data access against the Sage database.

Conclusion

We believe this is a fundamental bug/problem within the core of the Sage200 library, which needs resolving by R&D.

However, if you are processing large batches of any data using the Sage200 API, we highly recommend implementing a garbage collection mechanism to prevent ugly memory-related errors.

Our tests show that if you instantiate Sage objects in any kind of batch or continuously running process, you will suffer a 'memory leak': those objects will not be collected until you force a collection.
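The Sage200 API is .NET, where the mechanism amounts to calling GC.Collect() and GC.WaitForPendingFinalizers() at intervals during the batch. The same counter-driven pattern can be sketched in any managed runtime; here is a Python sketch (the process_batch helper and the collect_every threshold are illustrative, not part of any Sage API):

```python
import gc

def process_batch(records, handler, collect_every=50):
    """Run handler over records, forcing a garbage collection every
    `collect_every` records so finalizable objects don't pile up in Gen2.
    (In .NET the equivalent is GC.Collect() followed by
    GC.WaitForPendingFinalizers() and a second GC.Collect().)"""
    for i, record in enumerate(records, start=1):
        handler(record)
        if i % collect_every == 0:
            gc.collect()   # run finalizers and free unreachable objects
    gc.collect()           # final sweep once the batch completes
```

Tune the interval to your record size: collecting too often wastes CPU, too rarely and the heap balloons between sweeps.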


Integrating WMSs or 3PL with your ERP

Uncategorised on February 25th, 2013 1 Comment

Introduction

Integrating any Warehouse Management System (WMS) or Third Party Logistics (3PL) with an ERP product such as Sage300, Sage200 or X3 is a complex task. Consideration needs to be given to a variety of issues such as:

  • Synchronisation of stock systems
  • Integration touch points
  • Volume & frequency

This blog post draws on our experience of integrating a number of WMSs with ERP products and, while not necessarily offering definitive answers, aims to highlight the questions and issues affecting most WMS integrations.

Stock system

Unless a WMS has been purpose built to utilise the ERP's stock system, most WMSs maintain their own stock system, each affected by transactions. The transactional nature of the interaction means the two systems will drift apart, through both natural causes (latency in processing transactions) and unnatural ones (processing errors).

The integrator’s job is to make the ERP and WMS stock systems as synchronised as possible, or at least highlight the possible points of difference.

ERP WMS Stock

Logical partitioning

Segmenting a warehouse (or ERP) into areas such as the receipt dock, quarantine and pick face makes perfect sense in both the WMS and the ERP. There are two key integration questions which need answering:

  • Mapping: how do the locations in the ERP and WMS map to each other? How does this mapping affect the stock position and availability of stock in the ERP?
  • Stock movements: when stock is moved in the WMS from one logical area to another, does the movement also need to be reflected on the ERP? Typically, ERPs treat a location as a boundary for sales and purchase order purposes, thus a movement from quarantine to the pick face will need to trigger a corresponding movement in the ERP.

Time sensitive

  • Transactions should be processed in the order they were processed on the WMS. If batching groups of transactions for processing, an alternative is to process all transactions that add stock to the ERP before those that decrement stock, which limits the scenario where a transaction removes stock required by another operation.
  • Transactions such as P/O receipts can affect the available-to-sell value in your ERP, and thus affect potential sales.
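The adds-before-decrements rule above reduces to a simple sort key: stock-increasing transactions first, preserving WMS timestamp order within each group. A sketch, where the transaction-type codes are illustrative (a real WMS will have its own):

```python
# Transaction types assumed for illustration only.
STOCK_IN = {"PO_RECEIPT", "ADJUSTMENT_IN", "TRANSFER_IN"}

def ordered_for_posting(transactions):
    """Sort a batch so stock-increasing transactions post first,
    keeping WMS timestamp order within each group (sorted() is stable)."""
    return sorted(
        transactions,
        key=lambda t: (0 if t["type"] in STOCK_IN else 1, t["timestamp"]),
    )
```

For example, a pick recorded before a P/O receipt would still post after it, so the receipted stock is available when the pick decrements it.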

Decide the integration points

The obvious candidates for integration are:

  • Item Master download to the WMS.
  • Sales Order download, followed by the upload of picking information creating a shipment/invoice in the ERP. Consideration should also be given to the way back orders will be handled.
  • Purchase Order download, followed by upload of receipt confirmation creating a purchase receipt in the ERP.
  • Adjustment upload from the WMS to the ERP. Adjustments can take several forms, from write-offs of damaged or end-of-life stock to adjustments made through cycle counts. Be prepared for some heavy debate around the 'should cycle count adjustments be immediately uploaded/treated as gospel' question. (They should, but some warehouse managers/business owners seem to have other ideas.)

Less obvious or more exotic transactions need some thought:

  • Transfers: if there are multiple warehouses or the warehouse is partitioned (see logical partitioning above), a corresponding transfer in the ERP solution will need to be performed.
  • Customer and supplier returns: typically these are handled manually as the volume and workflow complexity usually means the cost for integration outweighs the benefit.
  • Kitting: kitting, or light manufacturing, can involve workflow, and the frequency may be too low to warrant integration.
  • Packaging operations: anything to do with packaging usually brings a degree of complexity. The purchasing process involves a PO and receiving is typically done at the warehouse door, but thereafter it becomes complex: the receipted goods are consumables, so they cannot be booked into stock, which makes stock control and management more difficult.

WMS Transaction Flow

Locking and recall

Consideration should be given to the workflow of both the WMS and ERP products. If, for example, a sales order starts to be picked in the WMS, does the change in status also need to be reflected in the ERP, preventing the order from being modified?

Similar considerations need to be given to modifications made to, or deletions of sales and purchase orders in the ERP.

Inventory transactions

Inventory transactions by their very nature are more difficult to handle.

  • Stock is sensitive: invalid stock levels can trigger exceptions, and stock costs, particularly for adjustments, need to be considered.
  • Shipments and receipt transactions need to be mapped back to their original order documents. That is, when a line is picked, the same line on the sales order needs to be affected.
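Mapping picked lines back to their originating order lines usually relies on the WMS echoing the ERP's order and line identifiers. A minimal sketch, where the field and key names are assumed for illustration:

```python
def apply_picks_to_order(order_lines, picks):
    """Decrement outstanding quantities on the ERP order lines referenced
    by WMS pick confirmations. Keys and field names are illustrative.

    order_lines: {(order_no, line_no): {"qty_outstanding": int}}
    picks: list of {"order_no", "line_no", "qty_picked"} dicts
    """
    unmatched = []
    for pick in picks:
        key = (pick["order_no"], pick["line_no"])
        line = order_lines.get(key)
        if line is None:
            unmatched.append(pick)   # surface for manual review, never drop
            continue
        line["qty_outstanding"] -= pick["qty_picked"]
    return unmatched
```

Returning unmatched picks rather than discarding them is the important design choice: they are exactly the transactions you will need when unpicking a discrepancy.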

Volume

Due to the volume and variety of the transactions, the integration must work! Attempting to unpick issues when things go wrong is time consuming, costly and sometimes impossible.

Batch and serial numbers

Batch and serial number integration adds complexity: it increases the number of reconciliation points between the WMS and the ERP solution, since each batch or serial number in the WMS must also be present in the ERP. For example, a warehouse managing 200 batch-controlled items may hold 1,000 discrete batches.

There are some variants on batch and serial number processing which can reduce complexity:

  • Record batches and serials only on the WMS.
  • Record batches and serials only for some operations, e.g. picking or goods-out.

Methods to integrate

As described in a previous post, we recommend staging tables for data exchange. However, when integrating with a 3PL, the method of exchange will typically be file exchange via FTP/SSH.

When using a staging table, you have the option of using a single table for all transactions or a table for each transaction type. A single table does introduce a degree of abstraction (each field is named FIELD1, FIELD2, etc., and then logically mapped depending on the transaction type), but it has the major advantages that it can be reconciled more easily and transactions can be ordered by date/time.
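The FIELD1/FIELD2 abstraction amounts to a per-transaction-type mapping from generic columns to named fields. A sketch, where the transaction types and column mappings are illustrative:

```python
# Illustrative logical mappings for a generic staging table.
FIELD_MAP = {
    "PO_RECEIPT": {"FIELD1": "po_number",    "FIELD2": "item", "FIELD3": "qty"},
    "PICK":       {"FIELD1": "order_number", "FIELD2": "item", "FIELD3": "qty"},
}

def decode_staging_row(row):
    """Translate a generic staging-table row into named fields,
    choosing the mapping by the row's transaction type."""
    mapping = FIELD_MAP[row["TRANTYPE"]]
    return {name: row[col] for col, name in mapping.items()}
```

The mapping table is the single place where the abstraction lives, which keeps the per-transaction processing code free of FIELDn references.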

Reconciliation

The ability to reconcile the ERP and WMS stock systems is paramount. We would recommend creating a report which can be run daily, at a time when no-one is using either the WMS or the ERP, comparing the WMS & ERP stock levels.
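Such a report boils down to comparing the two stock snapshots item by item; a minimal sketch:

```python
def reconcile(erp_stock, wms_stock):
    """Compare two {item: qty} snapshots and return the items whose
    quantities differ, including items missing from one side."""
    differences = {}
    for item in set(erp_stock) | set(wms_stock):
        erp_qty = erp_stock.get(item, 0)
        wms_qty = wms_stock.get(item, 0)
        if erp_qty != wms_qty:
            differences[item] = {"erp": erp_qty, "wms": wms_qty}
    return differences
```

An empty result means the two stock systems agree; anything else is a starting point for investigating latency or errored transactions.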

Test, test, test

It goes without saying that all integrations should be tested, but WMS integrations more so. The number of transactions, their time-sensitive nature and their variety make it a difficult beast. Try to cover the following when testing:

  • Performance: try to mimic the load and volume of transactions performed throughout the day.
  • Concurrency and multi-user environment: test for scenarios where transactions may be locked by the ERP or the WMS.
  • Check your audit points and controls: if you need to unpick an error, ensure you’re capturing all the necessary data from the outset.

Conclusion

Integrating any WMS is not simple. Consideration needs to be given to a wide variety of issues and integration points where cost is often a factor. With careful planning however, it can be very rewarding: seeing all the transactions tick through is quite nice!


Effective use of RVSpy in Sage300 development

Uncategorised on February 19th, 2013 7 Comments

Introduction

Last week there was a post on the Sage300/Accpac VBA forum asking how to automate the Reversals function in Bank Services. In short the responses were: macro recording didn’t work; the view was clumsy and difficult; and the best method to automate would be to mimic the entry via SendKeys.

Given that macro recording is not supported by the bank reversals function, it is necessary to use RVSpy to understand the view calls. This post will describe the various tips and tricks we use when we have to use RVSpy, using the reversals example as per the forum post.

A copy of the final source code and the RVSpy log can be found at the bottom of the post.

Filter the view calls

By default RVSPY captures all calls which leads to an extremely verbose and sometimes incomprehensible log file. Filtering the view calls to remove all the metadata and get requests will significantly trim the log file to the calls which are most important.

  • Filter view calls: Fields, Field, Attribs, Type, Get, Name, Presents, FieldName & FieldExists
    RVSpy FilterCalls
  • Turn off 'Show Nesting': typically you are only interested in the calls the UI makes, not the calls views make to other views.
    RVSpy DisableNesting

In our example the size of the generated log went from 1.8MB unfiltered to 427KB filtered.

Recording the entry

Finder Fields

Try to type values into each field directly: using the finder will add significantly to the log and can make it difficult to understand (see below for exceptions).

Use comments

Before you do something major, such as posting or adding an entry, insert a comment into the log. It will help you identify the event in the log file later on, and also gives you a cut-off point before the view performs its major processing.

RVSpy Comments

The oogily-boogily nature of the Bank Reversals View

If you look at the Bank Reversals view, you can easily understand the trepidation as it appears to have several purposes:

  • As a history, where posted reversals can be queried and retrieved.
  • The screen supports selecting multiple documents to reverse, so there is a range of fields to support the selection process.
  • Posting of reversals where there are several fields with conflicting names such as BANK/PBANK and SRCEAPP/PSRCEAPP.

The other major issue is around the view protocol: ‘Flat, Ordered Revisions’.

  • Flat Protocol: Flat is simply a list or table of records with no hierarchical element; a lot of the setup views such as Account Sets are flat.
  • Ordered Revisions: an internal list of changes is maintained in memory until a Process or Post command is given. In our example an Insert is performed prior to the Process. If processing multiple reversals, each reversal would be Inserted before the Process performs the reversals as a batch operation.

Understanding Your Log File

After recording is complete, the next task is to make sense of the log file.

In our example the two major issues were the fields we needed to set, and the calls required to post the reversal.

To understand the fields which need to be set, search for 'Put'. In our example the first Put is to PBANK to set the bank code. Note that RVSpy uses field numbers rather than names, so you will need to translate these into names using the Accpac Object Model.

RVSpy BankPut

Continue searching and you will find Puts to PAYORID, TRANDATE, and REASON fields.

Next you will find Puts to two key fields SERIAL and LINENO.

RVSpy SerialLinePut

These two fields were a little confusing at first, but we knew they had to relate to the transaction being reversed. In both recordings we made, we used the finder to select the transaction to reverse. Prior to the two Put statements we could see the finder searching BKTRANH & BKTRAND, so the two values corresponded to the keys in these tables.

The final step was to search for the comment we made before we posted the reversal.

RVSpy Insert

Three actions were performed:

  • Inserting the record into the view.
  • Switching the view mode to ‘Process Reversal Selections’.
  • Issuing Process to process the reversal.

Finally

It may be necessary to record something more than once: you may enter something incorrectly, or you may need to repeat an action to understand each call. But used effectively, RVSpy is the best (and sometimes the only) tool for understanding those Sage300 views/screens which do not support macro recording.

Download Bank Services Reversals source code


Creating prepayments for O/E transactions in Sage300

Uncategorised on January 31st, 2013 No Comments

This blog uncovers the mystery of how to create A/R prepayments that are associated with their O/E documents and can be accessed via the drill-down button on the A/R receipt entry screen. The logic described and presented in the blog applies equally to O/E orders, shipments and invoices.

Creating a prepayment with an O/E document is difficult for a few reasons:

  • Macro recording does not work; you need to use RVSPY to understand view calls.
  • Creating a prepayment requires manipulation of three separate, un-composed views; due to the lack of composition, every field on each view must be set individually.

There are three rough steps to creating a prepayment:

  • Create an A/R receipt batch
  • Create an O/E prepayment record.
  • Update the O/E transaction header view.

OE Prepayment Entry

Full source code is available for download at the bottom of this post.

A/R Receipt Batch

Firstly, it is necessary to either create a new A/R receipt batch or use a currently open one. Creating the batch is relatively straightforward and really only requires you to set the bank account. Note that you do not need to compose each of the receipt views, since you are not creating any receipt entries directly.

Update the O/E Prepayment View

The second step is to create the O/E prepayment record. The prepayment record is responsible for creating the A/R receipt entry, and it also associates the receipt with the O/E order so that there is a drill down against the prepayment.

Each O/E transaction (Order, Shipment & Invoice) has a separate prepayment view; each is identical except for the view ID, so the logic here applies to all three.

If you are familiar with Sage300 development, most views are nicely programmed, with a lot of the hard work performed behind the scenes by the view. Unfortunately, the prepayment views are the complete opposite! They take no default values from either the customer or the order, so it is necessary to set each field explicitly.

The code sample below illustrates this: it is necessary to set the Customer Name & Customer Currency individually, even though the Customer ID has already been set.

Update O/E Header View

The final step is to update the prepayment fields on the O/E transaction header view, which updates the Total Prepayments field on the order's Totals tab.

OE Order Header Prepayment

The attached code is a full working sample illustrating how to create and attach a prepayment and should work for Sage300 versions 5.6 and above.

Download Sage 300 OE Prepayment source code
