Samples arriving at the lab from various sources for testing will require management. Samples are typically given a unique ID for traceability, and this ID is then used throughout the lifecycle of the sample. Additional details of the sample may also be recorded as part of sample management, such as the type of sample, the time and date received, the batch/lot number, the batch size, the source, and the testing required.
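The sample details above can be sketched as a simple record. This is a minimal illustration, not taken from any specific LIMS; the field names and the 12-character ID format are assumptions made for the example.

```python
from dataclasses import dataclass, field
from datetime import datetime
import uuid

@dataclass
class Sample:
    """Minimal sample record; field names are illustrative assumptions."""
    sample_type: str
    source: str
    batch_lot_no: str
    batch_size: int
    tests_required: list
    # Captured automatically when the sample is registered.
    received_at: datetime = field(default_factory=datetime.now)
    # A unique ID assigned on receipt, used throughout the sample's lifecycle.
    sample_id: str = field(default_factory=lambda: uuid.uuid4().hex[:12])

sample = Sample("raw material", "Supplier A", "LOT-2024-001", 500,
                ["assay", "moisture"])
print(sample.sample_id)  # e.g. 'f3a1c9b2e7d4'
```

Whatever the storage backend, the key property is that the ID is generated once at registration and never reused, so every later result can be traced back to this record.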
Where testing provides a support function to a manufacturing process, it will be important for your system to have functionality for product and specification management. This function will allow you to pre-populate information entry for batch testing. Specifications should allow you to enter a description, target values, upper and lower limits, and upper and lower warning limits.
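A specification with target, action limits, and warning limits can be sketched as follows. The class and the pH example are illustrative assumptions, not part of any particular product.

```python
from dataclasses import dataclass

@dataclass
class Specification:
    """Illustrative specification: a target with action and warning limits."""
    description: str
    target: float
    lower_limit: float
    upper_limit: float
    lower_warning: float
    upper_warning: float

    def evaluate(self, result: float) -> str:
        """Classify a test result against the specification."""
        if not (self.lower_limit <= result <= self.upper_limit):
            return "FAIL"       # outside the action limits
        if not (self.lower_warning <= result <= self.upper_warning):
            return "WARNING"    # inside action limits but outside warning limits
        return "PASS"

ph_spec = Specification("pH of final product", 7.0, 6.0, 8.0, 6.5, 7.5)
print(ph_spec.evaluate(7.2))  # PASS
print(ph_spec.evaluate(7.8))  # WARNING
print(ph_spec.evaluate(8.5))  # FAIL
```

Pre-populating batch testing then amounts to copying these fields from the product's specification into the test record, so the analyst only enters the measured result.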
Management of Test Methods should allow you to configure settings for inputs and outputs. Inputs are what you expect the analyst to enter, and outputs are what you would like the system to calculate. For example, if you have a test method that requires the average of two readings, you should be able to configure two inputs, and the output will hold the calculated average of those two inputs.
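The two-readings-with-an-average example can be sketched like this, with outputs defined as functions of the configured inputs. The class and names are assumptions for illustration.

```python
from statistics import mean

class TestMethod:
    """Illustrative test method: configured inputs and calculated outputs."""
    def __init__(self, name, input_names, output_calcs):
        self.name = name
        self.input_names = input_names
        # output_calcs maps an output name to a function of the entered inputs.
        self.output_calcs = output_calcs

    def run(self, readings: dict) -> dict:
        """Validate the analyst's entries and compute the configured outputs."""
        missing = set(self.input_names) - readings.keys()
        if missing:
            raise ValueError(f"Missing inputs: {missing}")
        return {name: calc(readings) for name, calc in self.output_calcs.items()}

# A method with two inputs and one calculated output: their average.
duplicate_assay = TestMethod(
    "Assay (duplicate)",
    ["reading_1", "reading_2"],
    {"average": lambda r: mean([r["reading_1"], r["reading_2"]])},
)
print(duplicate_assay.run({"reading_1": 98.0, "reading_2": 99.0}))
# {'average': 98.5}
```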
Workflow management is the ability to define the steps required to complete the analysis of a sample. How samples are processed may vary from lab to lab, and a single lab may process different sample types differently. Customizable workflows let you configure the system to fit the way you work, rather than changing the way you work to fit the system's functionality.
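A workflow can be modelled minimally as an ordered list of steps, with different sample types assigned different step sequences. The step names below are invented for the sketch.

```python
class Workflow:
    """Illustrative workflow: an ordered list of steps a sample moves through."""
    def __init__(self, name, steps):
        self.name = name
        self.steps = steps

    def next_step(self, current):
        """Return the step after `current`, or None if the workflow is complete."""
        i = self.steps.index(current)
        return self.steps[i + 1] if i + 1 < len(self.steps) else None

# Two sample types configured with different workflows in the same system.
water = Workflow("Water sample",
                 ["Receive", "Log", "Test", "Review", "Report"])
stability = Workflow("Stability sample",
                     ["Receive", "Log", "Store", "Pull", "Test", "Review", "Report"])
print(water.next_step("Test"))     # Review
print(stability.next_step("Log"))  # Store
```

Real systems layer permissions, timing, and branching on top of this, but the core idea is the same: the lab defines the sequence, and the system enforces it.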
From Enterprise Resource Planning (ERP) to invoicing systems, third-party software integration allows other business systems to operate seamlessly with your LIMS. As with equipment integration, data can be transferred between the two systems. Uni-directional communication is also common, where one system pushes data to the other without receiving anything back.
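A uni-directional push often amounts to building a message in an agreed format and handing it to the other system. The sketch below only builds the payload; the field names and status values are assumptions, since the actual format would be agreed with the third-party system.

```python
import json

def build_result_message(sample_id, test, result, status):
    """Build a one-way payload carrying an approved result to a third-party
    system such as an ERP. All field names here are illustrative assumptions."""
    return json.dumps({
        "sample_id": sample_id,
        "test": test,
        "result": result,
        "status": status,
    })

msg = build_result_message("S-001", "assay", 99.1, "approved")
print(msg)
```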
Configuring user accounts should be essential functionality for any system; this is the core function that controls system access and permissions. The ability to create, remove, and lock user accounts gives you the most flexibility in controlling access.
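The create/lock/remove operations above can be sketched as follows. This is a minimal in-memory model; a real system would add authentication, roles with granular permissions, and an audit trail.

```python
class UserAccounts:
    """Illustrative account management: create, lock, and remove accounts."""
    def __init__(self):
        self.accounts = {}

    def create(self, username, role):
        self.accounts[username] = {"role": role, "locked": False}

    def lock(self, username):
        # Locking blocks access without deleting the account's history.
        self.accounts[username]["locked"] = True

    def remove(self, username):
        del self.accounts[username]

    def can_log_in(self, username):
        acct = self.accounts.get(username)
        return acct is not None and not acct["locked"]

users = UserAccounts()
users.create("analyst1", "Analyst")
print(users.can_log_in("analyst1"))  # True
users.lock("analyst1")
print(users.can_log_in("analyst1"))  # False
```

Locking rather than deleting is usually preferred in regulated environments, since the account's history must remain traceable.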
Architecture is the fundamental structure of the software. This is typically defined for an application (or a version of an application). Most applications that are used to store and deliver large amounts of data have one of the following architectures: Thin Client, Thick Client or Smart Client. As you begin to look at sourcing the right solution for your business, it’s important that you understand the pros and cons of each architecture in order to make an informed decision.
A Thin Client communicates with a central server, which performs the processing. This type of architecture can also be synonymous with remote access or desktop virtualisation.
Pros: Lightweight and low cost. Easy to deploy. Critical data is not stored locally on the client, which reduces the security threat.

Cons: A high-performing server is required, and the host server becomes a single point of failure: if the host server crashes, users may lose their sessions.
A thick client is also a client-server architecture, but one that provides functionality independently of the server: processing occurs on the end-user PC.
Pros: The server does not need to be as high-performing as a thin client requires. Clients can work offline by storing files locally.

Cons: It is costly to source hardware for each thick client required in the operational environment, and maintenance is required on every client machine.