What are Flame Detectors and how do they work?

Working in the fire industry for over five years has taught me the importance of understanding fire detection systems and how they can save lives and property.

Flame detectors play a crucial role in the early detection, response, and suppression of fires. Whether you’re protecting a home, office, or industrial facility, selecting the right flame detector can make all the difference.

What is a Flame Detector?

A flame detector is a sensor designed to detect the presence of fire by analyzing specific light spectrums or heat signatures. These devices are highly responsive and can initiate pre-programmed actions such as:

  • Sounding fire alarms.
  • Alerting central monitoring systems.
  • Activating fire suppression systems.
  • Deactivating gas or fuel lines.

The ability to quickly detect and respond to fires makes flame detectors indispensable in fire safety systems.
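The detect-then-respond chain can be sketched as a simple dispatcher. This is an illustrative sketch only; the function and action names are hypothetical placeholders, not any manufacturer's API.

```python
# Sketch of a flame-detector response sequence. Names here are
# illustrative placeholders, not a real product's API.
def on_flame_detected(actions):
    """Run each configured response action in order; return the names fired."""
    triggered = []
    for name, action in actions:
        action()  # e.g. energize a relay, send a network alert
        triggered.append(name)
    return triggered

# Example configuration mirroring the list above (stub actions).
responses = [
    ("sound_alarm", lambda: None),
    ("alert_monitoring", lambda: None),
    ("start_suppression", lambda: None),
    ("close_fuel_valves", lambda: None),
]
```

In a real system each stub would drive hardware or a network interface; the point is that the detector fires the whole pre-programmed sequence without human intervention.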

Types of Flame Detectors

Flame detectors come in various types, each using a different technology to detect flames. Understanding these options is key to choosing the right one for your needs.

Ultraviolet (UV) Flame Detectors

These detectors operate in the ultraviolet spectrum, identifying the UV radiation emitted by flames. UV flame detectors are highly sensitive and provide fast detection but can be affected by false alarms from UV light sources like welding arcs or sunlight.

Infrared (IR) Flame Detectors

Infrared flame detectors work within the infrared zone of the spectrum, detecting the heat signature of a flame.

They are reliable for indoor applications but may struggle with false positives from hot surfaces or sunlight.

UV/IR Flame Detectors

UV/IR detectors combine ultraviolet and infrared technologies to reduce false alarms while maintaining high sensitivity. They are a popular choice for environments with varying light conditions.

Multi-Spectrum Flame Detectors

These advanced detectors utilize multiple infrared sensors (commonly triple IR) to enhance accuracy and reliability.

They are designed for challenging environments where other detectors may fail, such as oil refineries or chemical plants.

Flame Imaging Detectors

Flame imaging detectors capture visual images of a fire and analyze them to determine the fire type and size. These are often used in specialized applications requiring detailed fire analysis.

How to Select the Right Flame Detector

Choosing the right flame detector depends on several factors. From my experience, these are the top three considerations:

Type of Fire You Need to Detect

Different flame detectors are designed to detect specific types of fires, such as:

  • Hydrocarbon fires (e.g., gasoline, oil, or methane).
  • Non-hydrocarbon fires (e.g., metals or hydrogen).

UV/IR or multi-spectrum detectors are ideal for hydrocarbon fires, while specialized detectors may be needed for unique fire types.

Environmental Conditions

Consider the environment where the detector will be installed:

Indoor vs. outdoor

UV flame detectors might struggle outdoors due to sunlight interference.

Hazardous areas

Multi-spectrum detectors are better suited for high-risk environments like chemical plants.

Required Response Time

In high-risk areas, response time can be critical. Technologies like UV/IR or multi-spectrum detectors offer faster response rates, making them suitable for environments with flammable materials.

Applications of Flame Detectors

Flame detectors are used across various industries, including:

  • Oil and Gas: Monitoring flammable gas leaks and hydrocarbon fires.
  • Chemical Plants: Detecting fire hazards in hazardous environments.
  • Warehouses: Protecting stored goods from accidental fires.
  • Power Plants: Ensuring safety in high-temperature and fuel-rich environments.

Key Tips for Maintenance

To ensure your flame detector remains effective, follow these maintenance tips:

  1. Test detectors regularly to verify functionality.
  2. Clean sensors to prevent dust or debris from obstructing detection.
  3. Update software or firmware for advanced detectors like flame imaging systems.

FAQ: Flame Detectors

What is the difference between a flame detector and a smoke detector?

A flame detector identifies the presence of fire by analyzing light spectrums or heat signatures, whereas a smoke detector senses smoke particles in the air.

Flame detectors are faster at detecting fires in open areas, while smoke detectors are more suited for detecting smoldering fires indoors.

What type of flame detector is best for outdoor use?

UV/IR flame detectors are ideal for outdoor environments due to their ability to reduce false alarms caused by sunlight. Multi-spectrum detectors are another reliable option for challenging outdoor conditions.

How do I know which flame detector to choose for my application?

Consider three main factors: the type of fire you’re monitoring, the environmental conditions, and the desired response time.

For example, a UV/IR detector may work well in a warehouse, while a multi-spectrum detector is better for a chemical plant.

Are flame detectors suitable for detecting all types of fires?

Not all flame detectors can detect every type of fire. For instance, hydrocarbon fires are best detected by UV/IR or multi-spectrum detectors, while non-hydrocarbon fires (like hydrogen or metal fires) may require specialized technology.

Do flame detectors require regular maintenance?

Yes, regular maintenance is essential. Detectors should be tested and cleaned periodically to ensure accuracy and functionality. Advanced detectors, like flame imaging systems, may require software updates as well.

Can flame detectors prevent fires?

While flame detectors cannot prevent fires, they play a critical role in early detection, allowing for swift action to suppress the fire or evacuate the area.

What is the typical response time for a flame detector?

The response time varies depending on the technology used. UV flame detectors typically respond in milliseconds, while multi-spectrum detectors may take a slightly longer time depending on their configuration.

Are flame detectors affected by false alarms?

Some flame detectors, especially UV or IR types, may be prone to false alarms from sunlight, welding arcs, or other heat sources. UV/IR and multi-spectrum detectors are designed to minimize these issues.

How Does Carbon Dioxide Affect Indoor Air Quality?

Many of us spend a large part of our day at the office; therefore, maintaining adequate indoor air quality at the workplace is essential. In this article, I will share how carbon dioxide affects indoor air quality.

How does carbon dioxide affect indoor air quality?

There is a direct relationship between the amount of carbon dioxide in the environment and the air quality.

Carbon dioxide (CO₂) builds up in the atmosphere and causes Earth’s temperature to rise, much like a blanket traps heat. This extra trapped heat disrupts many of the interconnected systems in our environment.

In other words, if you want to improve indoor air quality, you need to make sure you control the amount of CO₂ in the environment.

What Carbon Dioxide does to the body

Exposure to carbon dioxide can produce various health effects. These include headaches, dizziness, restlessness, difficulty breathing, sweating, and asphyxia, among others.

Where does carbon dioxide come from?

Carbon dioxide is a natural component of air. Outdoor air typically contains between 250 and 400 ppm (parts per million) of carbon dioxide.

Indoor concentrations can climb higher than that because people exhale carbon dioxide; if the ventilation system is poorly designed, CO₂ will accumulate indoors.

How much carbon dioxide is too much?

The occupational safety standard for industrial workplaces sets a maximum carbon dioxide level of 5000 ppm.

While levels below 5000 ppm are considered safe, some studies have shown that elevated carbon dioxide is a direct cause of drowsiness, lethargy, and reduced productivity.

What are the safe levels of carbon dioxide in rooms?

250-400 ppm

This is a normal background concentration in outdoor ambient air.

400-1000 ppm

This is the level of concentration typical of occupied indoor spaces with good air exchange. This is the value of concentration you should be aiming for.

1000-2000 ppm

When carbon dioxide reaches this level, most people will start to complain about drowsiness and poor air; you should improve ventilation as soon as possible.

2000-5000 ppm

Stale and stuffy air, poor concentration, loss of attention, increased heart rate, and slight nausea may be present.

5000 ppm

This is the workplace exposure limit in most countries; the exposure limit is calculated as an 8-hour time-weighted average (TWA).

Above 40,000 ppm

At this point, the exposure may lead to serious oxygen deprivation, resulting in permanent brain damage, coma, and even death. You should make sure that it never gets to this point.
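The concentration bands above can be summarized in a small helper function. This is a minimal sketch of the thresholds as described in this article, not a substitute for an occupational exposure standard.

```python
def co2_band(ppm):
    """Classify a CO2 reading (ppm) into the bands described above."""
    if ppm < 400:
        return "normal outdoor background"
    if ppm < 1000:
        return "typical well-ventilated indoor space; the target range"
    if ppm < 2000:
        return "complaints of drowsiness and poor air; improve ventilation"
    if ppm < 5000:
        return "stale air, poor concentration, increased heart rate"
    if ppm < 40000:
        return "above the 8-hour workplace exposure limit"
    return "danger: risk of serious oxygen deprivation"
```

An air quality monitor with an alert set around the 1000 ppm boundary gives you warning well before the higher bands are reached.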

What to do?

My recommendation would be to monitor the air quality at your office with an air quality monitor; when you see the air quality start to drop, you can take the appropriate measures. I use the Airthings 2930 Wave Plus.

Conclusion

This is how carbon dioxide affects indoor air quality; my recommendation is to maintain it below 1000 ppm. You can do this by improving how the air circulates in your office.

Also, I recommend using an indoor air quality monitor in your office; most of these monitors can connect to the internet or an app and warn you when the air quality is getting worse.

Keep working hard and stay safe; thank you for reading.

How To Design A Gas Detection System For Boiler Rooms

We commonly use natural gas for heating in industrial complexes; undetected gas leaks or incomplete combustion could cause an explosive hazard or an influx of carbon monoxide, resulting in loss of life, structural damage, or expensive waste of fuel.

Why do we need a gas detection system for boiler rooms?

The boiler room is not frequently occupied, so a leak may remain undetected for a long time.

A continuous gas monitoring and detection system will provide early warning of a gas leak and prevent loss of life and material.

What gases can be found in boiler rooms?

Natural gas

Natural gas is used in industry for heating, and undetected leaks can be deadly. Natural gas is mostly methane (typically around 90%).

Since natural gas is lighter than air, it will immediately rise to the ceiling or roof space of the boiler room.

Carbon Monoxide

Carbon monoxide is the result of the incomplete burning of hydrocarbon fuels such as wood products, natural gas, fuel oil, and coal.

For this reason, carbon monoxide and natural gas monitoring are essential for gas detection in boiler rooms.

Components of Boiler room gas detection system

The boiler room’s gas detection system consists of sensors strategically placed to detect natural gas and carbon monoxide, plus a controller that has relays or can connect to an external system.

Gas sensors

I recommend selecting catalytic bead sensors for boiler room applications. Catalytic bead sensors are less prone to false alarms than solid-state or semiconductor sensors.

Catalytic bead sensors have a life expectancy of 3 to 5 years, sometimes even more depending on how well you take care of them and environmental factors like temperature and humidity.

Boiler rooms are generally classified as non-hazardous (safe) areas, meaning explosion-proof sensors are not strictly required; even so, I recommend using them, ideally Class I, Division 1 rated sensors.

My recommendation for this would be Sensepoint XCD or E3point, both manufactured by Honeywell.

Location of the sensors

Natural gas is lighter than air, which means the gas will concentrate near the ceiling, so my recommendation is to place at least one sensor at ceiling level (typically within one foot of the ceiling), with the remaining sensors located over potential leak points.

Potential leak points include:

  • The gas burner assembly.
  • The gas train assembly.
  • The pressure boosters (if boosted).
  • The gas shut-off valve.
  • The combustion air intake.
  • The gas meter.

Depending on the size of the boiler room, the rule of thumb is to install one sensor for every 25-foot radius of coverage.
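The sizing advice above (one sensor per identified leak point, plus enough sensors to cover the floor area by the 25-foot-radius rule of thumb) can be sketched as a rough estimator. The function name and the way the two rules are combined are my own assumptions.

```python
import math

def sensors_for_room(length_ft, width_ft, leak_points, coverage_radius_ft=25):
    """Estimate a boiler-room sensor count: at least one per identified
    leak point, and enough area-coverage sensors so that each covers a
    circle of the given radius (rule of thumb, not a code requirement)."""
    area = length_ft * width_ft
    per_sensor = math.pi * coverage_radius_ft ** 2  # ~1963 sq ft at 25 ft
    coverage = math.ceil(area / per_sensor)
    return max(coverage, leak_points)
```

For example, a 100 ft by 60 ft boiler room with two identified leak points works out to four sensors by area coverage, while a small 40 ft by 30 ft room with six leak points still needs six.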

The controller

It is recommended to have at least one controller in the boiler room; as its name suggests, the controller will be the main brain of the gas detection system. You can set it up to shut down the valves, activate relays, or activate the horn and strobe.

Here are my recommendations when it comes to selecting a controller for the boiler room gas detection system.

Location of the controller

I recommend having a controller outside the boiler room so that people can see what is going on in the boiler room before they enter it.

Compatible with the sensors

I have seen people buy sensors from one manufacturer and a controller from another, or even from the same manufacturer, only to find they are incompatible.

Make sure the sensors you have can communicate with the controller; if you have 4-20 mA sensors, you need a controller that can take 4-20 mA input; if the sensors are Modbus, make sure the controller can accept Modbus inputs.

The controller must have relays

Depending on what you want to do, you may need a controller with relays; for example, to shut down a control valve or to start or stop a fan or a process.

Power Supply

Most controllers run on 24 VDC; make sure your power supply can drive both the sensors and the controller.

Visible Display

I recommend a controller with a visible display so that people can see the readings in real time.

Integration Options

Consider whether the boiler room gas detection system will be stand-alone or integrated with a larger system.

If you are going to connect it to a building management system (BMS), you probably need a controller that has BACnet (Building Automation Control Network) protocol as an output.

FAQ: Gas Detection System For Boiler Rooms

What detector do you need for a boiler room?

You need two types of detectors: one for carbon monoxide and one for flammable gases (LEL).

How many sensors do I need for a boiler room?

It depends on how many potential leaks there are; I recommend one per potential leak. Make sure the sensors are placed near the potential leak.

Is a carbon monoxide detector required in a boiler room?

Each boiler room containing one or more boilers from which carbon monoxide can be produced shall be equipped with a carbon monoxide detector with a manual reset.

Key takeaways: Gas Detection System For Boiler Rooms

Many industrial facilities use natural gas for heating in boiler rooms; this poses a danger of explosion from a natural gas leak, and incomplete combustion can produce carbon monoxide.

To design a gas detection system for boiler rooms, you need to consider sensors that will detect methane (LEL sensors) and carbon monoxide.

I recommend catalytic bead sensors for the combustible gas (LEL) channel and electrochemical sensors for carbon monoxide; both have an expected life of 3 to 5 years and produce fewer false alarms.

Place the sensors near the points where a leak is most likely, and mount the controller outside the boiler room where it is visible, so people can see the readings before they enter.

Bimetallic Strip – Everything You Need To Know

Bimetallic strips are an ingenious invention that harnesses the different expansion rates of two metals to perform a variety of tasks.

At its core, a bimetallic strip is made by bonding two strips of different metals together. These metals expand at different rates when heated, causing the strip to bend.

This simple principle has given rise to numerous practical applications.

The history of bimetallic strips

The story of bimetallic strips begins with John Harrison, an 18th-century clockmaker who revolutionized timekeeping.

By using bimetallic strips in his marine chronometers, Harrison was able to correct the timekeeping errors caused by temperature fluctuations, which was a game-changer for navigation at sea.

Fast forward to today, and bimetallic strips are everywhere. You’ll find them in thermostats, where they help control heating and cooling systems, and in electrical devices, acting as a safeguard against overheating.

In industrial settings, they’re crucial for various sensors and automatic controls, ensuring machines operate smoothly and safely.

The choice of metals is crucial—typically, a high-expansion metal like brass or copper is paired with a low-expansion metal like steel.

This combination creates the desired thermal sensitivity, making the strip bend predictably in response to temperature changes.

In essence, bimetallic strips are a brilliant blend of materials science and thermal engineering.

Their straightforward design and reliable performance make them a cornerstone in both everyday gadgets and sophisticated industrial systems.

What Is A Bimetallic Strip?

A bimetallic strip is a fascinating little device composed of two different types of metals bonded together.

These metals have different coefficients of thermal expansion, meaning they expand and contract at different rates when exposed to temperature changes.

When the temperature changes, one metal expands or contracts more than the other, causing the strip to bend or curve.

This bending action can be used to measure temperature changes or to act as a switch in various applications.

You’ll often find bimetallic strips in thermostats, where they help control heating and cooling systems by responding to temperature changes.

They’re also used in electrical devices as thermal protectors, shutting down circuits when things get too hot.

In industrial settings, they’re crucial components of sensors and control systems, ensuring safe and efficient operation.

In essence, a bimetallic strip is a simple yet incredibly effective way to harness the physical properties of metals for practical applications.

Who Invented The Bimetallic Strip?

The bimetallic strip was invented by John Harrison, an English clockmaker, in the mid-18th century.

Harrison developed the bimetallic strip for his third marine chronometer (H3) in 1759 to compensate for temperature-induced changes in the balance spring.

This invention significantly improved the accuracy of timekeeping, which was crucial for navigation at sea.

How Does a Bimetallic Strip Work?

A bimetallic strip operates on a simple yet effective principle that leverages the differing thermal expansion rates of two metals.

Here’s a detailed explanation of how it works:

Composition

A bimetallic strip is made by bonding two thin strips of different metals together. These metals are chosen because they have distinct coefficients of thermal expansion, meaning they expand and contract at different rates when exposed to temperature changes.

Thermal Expansion

When the temperature changes, each metal expands or contracts by a different amount. If the temperature increases, the metal with the higher coefficient of thermal expansion (let’s call it Metal A) will expand more than the metal with the lower coefficient (Metal B). Conversely, if the temperature decreases, Metal A will contract more than Metal B.

Bending Action

Because Metal A and Metal B are bonded together and can’t move independently, this difference in expansion rates causes the bimetallic strip to bend. When heated, the strip bends towards the metal with the lower coefficient of thermal expansion (Metal B). When cooled, it bends towards the metal with a higher coefficient of thermal expansion (Metal A).
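The bending direction follows directly from free linear expansion, dL = L x alpha x dT. A minimal sketch, using typical published expansion coefficients for brass (a high-expansion Metal A) and steel (a low-expansion Metal B):

```python
def thermal_expansion(length_mm, alpha_per_C, delta_T):
    """Free linear expansion of a single strip: dL = L * alpha * dT."""
    return length_mm * alpha_per_C * delta_T

# Typical approximate coefficients of linear expansion (per deg C):
ALPHA_BRASS = 19e-6  # Metal A: high expansion
ALPHA_STEEL = 12e-6  # Metal B: low expansion

L, dT = 100.0, 50.0  # a 100 mm strip heated by 50 deg C
dA = thermal_expansion(L, ALPHA_BRASS, dT)  # brass side: 0.095 mm
dB = thermal_expansion(L, ALPHA_STEEL, dT)  # steel side: 0.060 mm

# The brass side elongates more, so the bonded strip bows toward the
# steel side (the metal with the lower coefficient of expansion).
bends_toward = "steel" if dA > dB else "brass"
```

Even a difference of a few hundredths of a millimetre over a 100 mm strip is enough to produce a visible curvature, because the two layers are rigidly bonded along their whole length.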

What is a Bimetallic Strip Used For?

Bimetallic strips are incredibly versatile and find application in a wide range of fields due to their ability to convert temperature changes into mechanical movement. Here are some of the primary uses:

Thermostats

One of the most common applications of bimetallic strips is in thermostats. In these devices, the strip bends in response to temperature changes, either closing or opening an electrical circuit.

This action regulates heating and cooling systems in homes, appliances, and industrial equipment, maintaining a desired temperature.

Thermal Switches

In electrical devices, bimetallic strips serve as thermal protectors. When a device overheats, the strip bends, breaking the circuit and preventing further heating. This helps in avoiding damage to the device or potential fire hazards.

Thermometers

Bimetallic strips are used in dial thermometers, where the bending of the strip is converted into a rotary motion that moves a needle across a scale to indicate temperature. These thermometers are simple, durable, and do not require batteries or external power.

Industrial Controls

In industrial settings, bimetallic strips are integral to various sensors and control systems. They help in monitoring and regulating the temperature of machinery and processes, ensuring operational safety and efficiency.

Clocks and Chronometers

The bimetallic strip was invented by John Harrison primarily for use in marine chronometers to compensate for temperature-induced errors in timekeeping.

This application is still relevant in precision instruments where temperature stability is crucial.

Fire Alarms

Some fire alarms use bimetallic strips to detect heat. When a certain temperature is reached, the strip bends and triggers the alarm, alerting occupants to the presence of a fire.

Automotive Applications

Bimetallic strips are used in various automotive components, such as temperature sensors for engine management systems, where they help maintain optimal performance and prevent overheating.

Household Appliances

Common household appliances like irons, ovens, and toasters use bimetallic strips to regulate temperature.

The strip ensures the appliance maintains a consistent temperature, preventing overheating and ensuring safety.

Electrical Overcurrent Protection

In circuit breakers, bimetallic strips are used to detect overcurrent conditions. When excessive current flows through the circuit, the strip heats up, bends, and trips the breaker, cutting off the electrical supply to prevent damage.

What Happens When A Bimetallic Strip Is Heated?

When a bimetallic strip is heated, an interesting process occurs due to the different thermal expansion rates of the two metals bonded together. Here’s what happens:

Differential Expansion

Each metal in the strip has a different coefficient of thermal expansion, meaning it expands at different rates when subjected to heat.

Typically, one metal (let’s call it Metal A) has a higher coefficient of expansion than the other metal (Metal B).

Bending or Curving

As the bimetallic strip is heated, Metal A expands more than Metal B. Since these two metals are rigidly bonded, the difference in expansion rates causes the strip to bend or curve. The strip bends towards the metal with the lower coefficient of thermal expansion (Metal B).

Mechanical Movement

The bending of the strip can be harnessed to perform mechanical work. For example, in a thermostat, the bending action of the strip can open or close an electrical contact, thereby turning heating or cooling systems on or off.

Thermal Sensitivity

The degree of bending is proportional to the temperature change. This property allows the bimetallic strip to be used as a precise temperature-sensitive device in various applications.

Which Is The Principle On Which The Bimetallic Strip Works?

The bimetallic strip operates on the principle of differential thermal expansion. When two metals with different coefficients of thermal expansion are bonded together and subjected to temperature changes, they expand or contract at different rates.

This difference in expansion causes the strip to bend or curve, as one metal expands or contracts more than the other.

This bending motion, which is directly proportional to the temperature change, is harnessed for various practical applications such as temperature measurement and control, acting as a switch in devices like thermostats and thermal protectors.

What Is The Principle Of Bimetallic Expansion?

The principle of bimetallic expansion is based on the concept that different metals expand at different rates when exposed to temperature changes.

When two metals with distinct coefficients of thermal expansion are bonded together into a strip, any temperature change will cause them to expand or contract at different rates.

This differential expansion leads to the bending or curving of the strip because one metal elongates more than the other.

This bending action is utilized in various practical applications, such as in thermostats, thermal switches, and temperature gauges, to measure and respond to temperature changes efficiently.

Which Metal Expands More In A Bimetallic Strip?

In a bimetallic strip, the metal that expands more when heated is the one with the higher coefficient of thermal expansion.

Common examples of such metals include brass and copper, which typically expand more than metals like steel or Invar.

The difference in expansion rates between the two metals is what causes the bimetallic strip to bend or curve when subjected to temperature changes.

Conclusion

Bimetallic strips exemplify the elegant synergy between materials science and thermal engineering.

By leveraging the differing expansion rates of two bonded metals, these strips convert temperature changes into mechanical movement.

This principle of differential thermal expansion has led to numerous practical applications, ranging from household thermostats and appliances to industrial controls and precision instruments.

Bimetallic strips are fundamental components in many devices, ensuring reliable temperature measurement and control.

Their simplicity, reliability, and effectiveness make them a cornerstone of modern technology, continuing to play a vital role in our everyday lives and various industries.

4-20 mA Current Loop

The 4-20 mA current loop remains one of the most dominant types of analog output in the industry today.

In this article, I will look at the history of the 4-20 mA loop, why it is widely used in industrial automation, and its advantages and disadvantages.

What is a 4-20 mA current loop?

A 4-20 mA current loop is a circuit connecting a transmitter (the sensor) to a receiver: the transmitter regulates the loop current between 4 mA and 20 mA in proportion to the measured value, the receiver reads that current, and the wire returns to the transmitter to close the loop.

The history of 4-20 mA current loop

In the early days of industrial automation, most mechanical devices were controlled by pneumatic signals; these systems were costly, bulky, and difficult to repair. The control signal used back then was 3-15 psi.

With the rapid development of electronics in the 1950s, electronic devices became cheaper, and eventually the old pneumatic 3-15 psi systems were replaced by analog controllers that used the 4-20 mA signal.

Why 4-20 and why not 0-20 mA?

Now that we know the control signal picked was 4-20 mA, the question I often get is: why 4-20 mA and not 0-20 mA? The simple answer is that 0-20 mA suffers from the dead-zero problem.

What is a dead zero issue?

A dead zero occurs when the lowest signal value is 0 mA: the controller cannot tell whether 0 mA means the sensor is reading its lowest value or the loop has an open circuit.

If you have an H2S sensor that detects 0 to 100 ppm, it will output 0 mA when there is 0 ppm of H2S, and it will also show 0 mA when there is an open circuit in the loop. This ambiguity can have a huge impact on process control.

How do you solve a dead zero issue?

The solution was simple: start above zero. In the same example, the sensor sends 4 mA when it reads zero ppm, and 0 mA then only ever means an open circuit in the loop. Problem solved.
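This live-zero scheme is easy to express in code. A minimal sketch for the 0-100 ppm H2S example; the 3.5 mA under-range fault threshold is a common convention I am assuming here, not a universal standard:

```python
def decode_live_zero(current_mA, span_low=0.0, span_high=100.0):
    """Convert a 4-20 mA reading to ppm for a 0-100 ppm H2S sensor.
    A reading well below 4 mA signals a loop fault, not a low value."""
    if current_mA < 3.5:  # under-range fault threshold (assumption)
        raise ValueError("loop fault: open circuit or failed transmitter")
    fraction = (current_mA - 4.0) / 16.0
    return span_low + fraction * (span_high - span_low)
```

With 0-20 mA there would be no way to write that first check: a broken wire and a genuine zero reading would be indistinguishable.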

Why 4 mA?

We said above that solving the dead-zero issue required starting at a value greater than zero; the next question is, why 4 mA and not another value? Here is the answer.

Electronic chips require at least 3 mA to work

To move from mechanical controllers to electronic ones, electronic chips were introduced. These chips require a minimum of about 3 mA of current to function, so 4 mA was chosen to leave a margin.

The 20% bias

The original control signal was 3-15 psi; 20% of 15 is 3, and 20% of 20 mA is 4 mA.

Why 20mA?

There are 3 reasons why 20 mA was picked:

The human heart can withstand up to 30 mA.

20 mA is used as the maximum because currents above roughly 30 mA can be dangerous to the human heart. So, from a safety point of view, 20 mA leaves a comfortable margin.

1:5 rule

The 4-20 mA signal was designed to replace the old 3-15 psi standard, which has a 1:5 ratio between its minimum and maximum; since most instruments at the time were built around that pattern, the new signal kept the same 1:5 ratio (4:20).

Linearity

Because the signal maps linearly to the measured value, it is easier to design and implement control systems using the 4-20 mA signal.

Easy to design

Most industrial transmitters are powered with 24 V, and since the signal obeys Ohm’s law, V=IR, it makes it easier to design devices that can be connected to the 4-20 mA loop.

Simple calculations

Having a signal that ranges from 4 to 20 mA makes it very easy to calculate the expected values. If we have a sensor that measures a 0 to 100 range, here are the corresponding current values:

  • 0 → 4 mA
  • 25 → 8 mA
  • 50 → 12 mA
  • 75 → 16 mA
  • 100 → 20 mA

It is that simple.
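These values all come from one linear mapping: I = 4 mA + 16 mA x (value - low) / (high - low). A minimal sketch:

```python
def to_milliamps(value, span_low=0.0, span_high=100.0):
    """Map a process value onto 4-20 mA: I = 4 + 16 * fraction of span."""
    fraction = (value - span_low) / (span_high - span_low)
    return 4.0 + 16.0 * fraction
```

For the 0-100 sensor above, `to_milliamps(25)` gives 8 mA, `to_milliamps(50)` gives 12 mA, and so on, matching the table.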

Simple conversion to 1-5V

For other elements of an industrial automation system to interpret the signal, it must be converted to a digital value.

Most ADCs (analog-to-digital converters) sample a voltage; passing the loop current through a precision 250-ohm resistor converts the 4-20 mA signal into a 1-5 V signal via Ohm’s law, V = IR, which the ADC can then digitize.
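That conversion is just Ohm's law applied to the shunt resistor. A minimal sketch:

```python
def loop_voltage(current_mA, shunt_ohms=250.0):
    """Voltage across the precision shunt resistor: V = I * R."""
    return (current_mA / 1000.0) * shunt_ohms
```

With the standard 250-ohm shunt, 4 mA reads as 1 V and 20 mA as 5 V, which is exactly the classic 1-5 V ADC input range.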

Types of 4-20 mA current loop

There are four types of 4-20 mA current loops; the two-wire version is by far the most common.

The others are the three-wire source, three-wire sink, and four-wire variants, all similar in their fundamental working principle.

I explain the difference between them in this article here.

Advantages of 4-20 mA current loop

Worldwide industry standard

Since it is easy to implement and design control loops with a 4-20 mA signal, it has become a worldwide standard across industrial automation.

Easy to connect and configure

The 4-20 mA loop is easy to design, configure, and wire; you do not need a lot of training to wire or configure it; hence, it is used in most applications.

Less sensitive to electronic noise

Because the signal is transmitted as a current rather than a voltage, it is much less sensitive to electrical noise picked up along the cable.

Fault detection using live zero

Since the signal starts at 4 mA, it is very easy to know if there is a fault in the loop; if we receive 0 mA, we know there is a fault somewhere.

You can use a simple multimeter to detect a fault

Since the loop will carry current, you can measure the current by using a simple $10 multimeter; this will reduce the diagnostic time and fault detection cost.

Disadvantages of the 4-20 loop

There are a few disadvantages to using the 4-20 mA loop; for me, these two are the main ones.

The current may introduce a magnetic field

The current may induce magnetic fields and crosstalk in parallel cables; this can be mitigated by using twisted-pair cable.

One pair of cables can only carry one process

This is huge. When you design a control loop using a 4-20 mA signal, you need to know that one loop can only have one variable, so if you have many loops, you will need more cables, and this will increase the cost of installation and eventually make the fault diagnostic more complicated.

Conclusion

We took a look at the famous 4-20 mA current loop: its history, why it is widely used in industrial automation, and its advantages and disadvantages.

If you have anything to add to this or a question, please leave your comment below. Thank you for reading.

What is a Relay?

A relay is one of the most used components in industrial automation and control. In this article, I am going to explain what a relay is, the types of relays, and how to correctly use a relay.

What is a relay?

A relay is an electrically operated switch. It consists of a set of input terminals for one or more control signals and a set of operating contact terminals.

Let's say you want to turn on a fan when the carbon monoxide (CO) level reaches a certain threshold. Instead of relying on someone to watch a CO detector and start the fan manually, most fixed gas detectors come with a relay that turns the fan on or off when the CO level crosses the setpoint.
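That interlock can be modeled in a few lines of code. This is a toy sketch, not real detector firmware; the 50 ppm setpoint and all the names are hypothetical:

```python
class Relay:
    """Toy model of a relay: a coil drives a normally-open contact."""

    def __init__(self) -> None:
        self.energized = False

    def set_coil(self, powered: bool) -> None:
        # Energizing the coil closes the normally-open contact.
        self.energized = powered

    def contact_closed(self) -> bool:
        return self.energized


CO_ALARM_PPM = 50  # hypothetical alarm setpoint


def update_fan(relay: Relay, co_ppm: float) -> bool:
    """Drive the fan contact from the CO reading; True means the fan runs."""
    relay.set_coil(co_ppm >= CO_ALARM_PPM)
    return relay.contact_closed()
```

The point of the model is the separation the article describes: the detector only powers the coil, and the contact does the actual switching of the fan circuit.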

How do relays work?

A relay consists of two parts. The first part is the magnetic coil, which activates the switching action depending on whether electric power is applied.

The second part of the relay is the contacts. As the name suggests, contacts make the power connections to external devices. They usually come as normally open, normally closed, and common contacts.

When current flows through the coil, the electromagnet energizes and its magnetic field intensifies.

The magnetic field attracts the movable armature: the upper contact arm is pulled toward the lower fixed arm, closing the contacts and completing the circuit to the load.

When the relay is de-energized (the coil current is switched off), the armature moves in the opposite direction and returns to its initial position, opening the circuit. A spring, helped by gravity, provides the restoring force.

Types of relays

There are four types of relays, and each type is wired differently. If you do not know which type you have, check the connection diagram of that particular relay.

SPST – Single Pole, Single Throw

An SPST relay, or single pole single throw relay, is the simplest type of relay out there; it consists of one magnetic coil and one set of contacts. It connects or disconnects only one contact when it operates.

SPDT – Single Pole, Double Throw

A Single Pole Double Throw (SPDT) relay is a relay that only has a single magnetic coil and can connect to and switch between 2 contacts.

It is the most used relay type in the industry. It consists of one coil and two contacts (one normally open and one normally closed) that share a common contact.

DPST – Double Pole, Single Throw

A DPST, or Double Pole Single Throw relay, has a single magnetic coil that operates two sets of contacts.

Each pole has one corresponding contact, and the two poles are electrically isolated from each other.

It can therefore switch two different circuits at once. It only provides a simple on/off switching function, and both input-output pairs switch simultaneously.

DPDT – Double Pole, Double Throw

A Double Pole Double Throw (DPDT) relay has a single magnetic coil and two poles, each of which can switch between two contacts (one normally open and one normally closed), for four contacts in total.

The DPDT relay is effectively two sets of SPDT contacts operated by the same coil.

It is designed for cases where a single relay needs to activate or deactivate two external devices, such as a horn and a strobe light.

Conclusion

That is it. In this article, we defined what a relay is and the different types of relays you can find in the industry. Thank you for reading.

How to Convert 360 Fahrenheit to Celsius

Converting Fahrenheit to Celsius is trickier than most unit conversions.

Today I am going to share with you how to do that, and I am going to provide an example of how to convert 360 Fahrenheit to Celsius.

Why is converting temperature units more complicated?

Most measurement units share the same starting point; for example, the distance units centimeters and meters both start at zero, so converting between them is just a matter of scaling.

The commonly used temperature scales, Celsius, Fahrenheit, and Rankine, do not share the same zero point; for example, water freezes at 0°C but at 32°F. So you cannot do a simple scaling; you need to run the value through an equation to get the answer.

The Difference Between Degree Celsius (°C) and Degree Fahrenheit (°F)

A thermometer can help us determine how cold or hot a substance is. Temperature is in most of the world measured and reported in degrees Celsius (°C). In the U.S. it is common to report the temperature in degrees Fahrenheit (°F). In the Celsius and Fahrenheit scales the temperatures where ice melts (water freezes) and water boils are used as reference points.

  • On the Celsius scale, the freezing point of water is defined as 0 °C and the boiling point as 100 °C.
  • On the Fahrenheit scale, water freezes at 32 °F and boils at 212 °F.

How to convert Fahrenheit to Celsius

0 degrees Fahrenheit is equal to -17.77778 degrees Celsius:

0 °F = -17.77778 °C

The temperature T in degrees Celsius (°C) is equal to the temperature T in degrees Fahrenheit (°F) minus 32, times 5/9:

T(°C) = (T(°F) – 32) × 5/9

or

T(°C) = (T(°F) – 32) / (9/5)

or

T(°C) = (T(°F) – 32) / 1.8
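All three formulas above are the same relationship written three different ways; in code it is one line each way (the function names are mine):

```python
def fahrenheit_to_celsius(temp_f: float) -> float:
    """T(°C) = (T(°F) - 32) × 5/9."""
    return (temp_f - 32) * 5.0 / 9.0

def celsius_to_fahrenheit(temp_c: float) -> float:
    """The inverse: T(°F) = T(°C) × 9/5 + 32."""
    return temp_c * 9.0 / 5.0 + 32

# The article's worked example:
print(round(fahrenheit_to_celsius(360), 4))  # 182.2222
```

Running either function and then its inverse returns the value you started with, which is a quick sanity check on the arithmetic.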

360 Fahrenheit to Celsius conversion

How to convert 360 degrees Fahrenheit to Celsius.

The temperature T in degrees Celsius (°C) is equal to the temperature T in degrees Fahrenheit (°F) minus 32, times 5/9:

T(°C) = (T(°F) – 32) × 5/9 = (360°F – 32) × 5/9 = 182.2222°C

So 360 degrees Fahrenheit is equal to 182.2222 degrees Celsius:

360°F = 182.2222°C.

How do you convert C to F without a calculator?

Without a calculator, you can still convert Celsius to Fahrenheit: multiply the Celsius temperature by 1.8 and add 32. This method gives you the exact converted temperature.

If I wanted to convert 182.2°C to F, I would take 182.2 x 1.8+32=359.96°F.

What is the difference between 1 degree Celsius and 1 degree Fahrenheit?

On the Celsius scale, there are 100 degrees between the freezing point and the boiling point of water, compared to 180 degrees on the Fahrenheit scale. This means that a change of 1 °C corresponds to a change of 1.8 °F.

Which is colder, −40 °C or −40 °F?

They are equally cold. −40 is where the two scales give the same reading: the Fahrenheit and Celsius scales converge at −40 degrees, so −40 °F and −40 °C represent the same temperature.

What is the Fahrenheit to Celsius ratio?

To convert degrees Celsius to Fahrenheit, multiply by 1.8 (or 9/5) and add 32; to go the other way, from Fahrenheit to Celsius, subtract 32 and multiply by 5/9.

Conclusion

That is it; this is how to convert 360 Fahrenheit to Celsius. I hope it was useful to you. Thank you for reading.

What is the Difference Between Sink and Source?

I get this question a lot: What is the difference between sink and source when it comes to wiring the sensors to the controller? In this article, I am going to explain the difference between the two.

The concept of sink and source

If you have wired a sensor or transmitter to a controller such as PLC, I am sure you heard the terms sinking and sourcing.

What are source and sink?

The concept of sink and source describes a current flow relationship between input and output devices in a control system and their power supply. The two terminologies apply only to DC (Direct Current) logic circuits.

A sinking digital I/O (input/output) provides a grounded connection to the load, whereas a sourcing digital I/O provides a voltage source to the load.

Let’s assume that you want to wire a field device to a controller.

If the current flows from the field device to the controller, we say that the field device is sourcing to the controller and the controller is sinking with respect to the field device, and vice versa.

One source of confusion I run into when explaining this concept to customers is that we can be talking about the same circuit from different references. So if someone tells you a device is sink or source, always ask what their reference is (the field device or the controller).

The most important point to remember here is that in both cases you have current flowing from one device to another; you just need to figure out in which direction.

How to wire a source sensor to a controller?

The 3-wire 4-20 mA loop uses three wires to connect the field device with the controller; here the signal has its own wire, so you have one wire for the +, one wire for the -, and one wire for the signal.

The two wires (the + and the -) are used to power the field device, while the signal wire is used to carry the field device signal to the controller.

The most important thing to note here is that the current moves from the field device to the controller.

How to wire a sensor sink to a controller?

This is almost the same as the three-wire source type. The 3-wire 4-20 mA loop uses three wires to connect the field device with the controller; here the signal has its own wire, so you have one wire for the +, one wire for the -, and one wire for the signal.

The two wires (the + and the -) are used to power the field device, while the signal wire is used to carry the field device signal to the controller.

The main difference between the 3-wire sink and 3-wire source is that in the 3-wire sink configuration, the current signal moves from the controller to the field device.

FAQ about Sink and Source

What is the difference between sink and source?

The difference between sink and source is that in a source connection, the current flows from the field device to the controller, and in a sink connection, the current flows from the controller to the field device.

How do I know if my controller is a sink or source?

The easiest way to know if the controller is a sink or source is to check the input card; it should specify that.

If it is not clear, you can read the controller user guide, or you can give a call to the manufacturer, and their tech support should be able to tell you if the controller is a sink or source.

How do I know if my sensor or transmitter is a sink or source?

The easiest way to know if your sensor or transmitter is a sink or source is to check the wiring diagram in the user manual or give a call to the manufacturer of the field device.

In my experience, most field devices (sensors and transmitters) come with DIP switches that let you change them between source, sink, and loop modes.

Can I wire a sink transmitter to a sink controller?

No, you cannot wire a sink transmitter to a sink controller. The reason for this is that both units will be expecting to draw current from the circuit. This will lead to the wrong signal being sent, and eventually, the units might not power up.

Can I wire a source transmitter to a source controller?

No, you cannot wire a source transmitter to a source controller; both units will be providing current to the system, and this will lead to the wrong reading, and the unit might get damaged. Do not do this.

The best way to wire is to set one unit as a sink and one as a source.

Conclusion

We have analyzed the difference between sink and source when it comes to wiring industrial transmitters to the controllers and also answered some frequently asked questions about the subject.

If something is not clear or you have any further questions, please leave them in the comment section below.

Types of 4-20 mA Current Loop

The 4-20 mA current loop remains one of the most dominant types of analog output in the industry today.

I have been working with wiring industrial transmitters for some time now, and one thing I found out is that most people cannot wire them properly because they fail to distinguish between different types of 4-20 mA current loops.

What are the types of 4-20 mA Current Loop

There are 4 types of mA output signals:
– Loop (2-Wire)

– Source (3-Wire)

– Sink (3-Wire)

– Isolated (4-Wire)

Each form uses a different reference path for the creation of mA signals, which is dependent on the controller or receiving device (i.e., PLC) to which each field device is connected.

Loop (2-Wire)

This is one of the most common 4-20 mA forms; you just need two wires for power and communication between the field device and the controller.

The controller provides the power to the loop, and the 4-20 mA signal flows from the field device back to the controller through the common (return) wire.

The main advantage of the 2-wire loop 4-20 mA signal is that it is easy to wire and requires only two wires, which lowers the installation cost.

The disadvantage of the 2-wire 4-20 mA loop is that the same pair carries both power and signal, so if a wire breaks, the field device loses power as well as communication.

Although the wiring differs a little between the types, the working principle is the same; understanding how each one is wired is fundamental to connecting them correctly.

3-wire 4-20 mA loop (Source)

The 3-wire 4-20 mA loop uses three wires to connect the field device with the controller; here the signal has its own wire, so you have one wire for the +, one wire for the -, and one wire for the signal.

The two wires (the + and the -) are used to power the field device, while the signal wire is used to carry the field device signal to the controller.

The most important thing to note here is that the current moves from the field device to the controller.

The main advantage of the 3-wire 4-20 mA loop source is that the signal and the power wires are separated, so in case the power wire is disconnected, the field device can still be on.

The main disadvantage of this type of 4-20 mA signal is that it uses 3 wires, so more cable is used for wiring; hence, the cost of installation goes up.

3-wire 4-20 mA loop (Sink)

This is almost the same as the 3-wire source type. The 3-wire 4-20 mA loop uses three wires to connect the field device with the controller; here the signal has its own wire, so you have one wire for the +, one wire for the -, and one wire for the signal.

The two wires (the + and the -) are used to power the field device, while the signal wire is used to carry the field device signal to the controller.

The main difference between the 3-wire sink and 3-wire source is that in the 3-wire sink configuration, the current signal moves from the controller to the field device.

The main advantage of the 3-wire 4-20 mA loop sink is that the signal and the power wires are separated, so in case the power wire is disconnected, the field device can still be on.

The main disadvantage of this type of 4-20 mA signal is that it uses 3 wires, so more cable is used for wiring; hence, the cost of installation goes up.

Isolated (4-Wire)

The 4-wire 4-20 mA current loop is my least favorite; it works almost like the 2-wire loop, but the main difference is that with 4 wires you need two power sources: the field device needs its own power supply.

The current signal will be flowing from the field device to the controller, and the loop is powered by the controller in a 2-wire form.

The main advantage of the 4-wire 4-20 mA loop is that the field device and the controller use different power sources, so if the controller power source goes offline, the field device will keep working.

The main disadvantage is that you will need two power sources; the power sources are not cheap, and this will increase the cost of installation.

How do you know which type of 4-20 mA loop you need to wire?

All field devices come with user guides, and in each user guide, you should be able to see the wiring diagram.

If you cannot figure out from the user manual which type of 4-20 mA signal your device or controller uses, contact the manufacturer of your device, and they should be able to tell you how to wire it.

Conclusion: Types of 4-20 mA Current Loop

That is it; those are the types of 4-20 mA current loops. Depending on the type, the flow of current and the wiring can change a little.

If you have one of those and you need some help, please post your question below, and we will get back to you.

What is Modbus, and How does it work?

Modbus is one of the most common communication protocols in industrial automation. In this post, I will share with you what Modbus is, its types, advantages, when to avoid using it, and how to diagnose it.

What is the Modbus communication protocol?

Modbus communication protocol is a serial communication protocol developed by Modicon® in 1979 for use with its programmable logic controllers (PLCs).

In simple terms, it is a method used for transmitting information over serial lines between electronic devices, one being the master (the one that initiates the communication) and the other the slave (the one that responds to a communication).

How does Modbus work?

In a few words, this is how the Modbus protocol works: it exchanges data using a request/response mechanism between a master and a slave.

The master/slave principle is a type of communication protocol in which a device (the master) controls one or more devices (the slaves).

Why is Modbus so popular?

Modbus is popular among engineers and technicians because it is so easy to understand; you do not need to be a programmer to understand it.

I remember when I was providing training to new hires, I would tell them that Modbus RTU is very simple: connect A to A and B to B, and everyone was able to wire it on the first day of class.

Is Modbus dead?

No, Modbus is not dead; this is a myth. It will continue to live on, as there are millions of Modbus devices, and every day many of them are being built and implemented.

Is the Modbus protocol industry-specific?

No, the Modbus protocol is not industry-specific and can be used in different types of industries such as factory automation, building automation, process control, oil & gas, traffic & parking, agriculture & irrigation, water & wastewater, pharmaceutical and medical, material handling, etc.

When should you not use Modbus?

Don’t use Modbus if you have a lot of data to transfer. The packets are limited to around 120 bytes maximum.

Transferring 1K requires almost ten messages. It’s just not efficient for any kind of large data transfer.

What are the advantages of Modbus?

– Longer distances.

– Higher speeds.

– The possibility of multiple devices on a single multi-drop network.

Types of Modbus Communication Protocols

Several versions of the Modbus protocol exist for serial ports and Ethernet; the most common are:

– Modbus RTU

– Modbus ASCII

– Modbus TCP

– Modbus Plus

Modbus RTU (Remote Terminal Unit)

Modbus RTU is the most common implementation of Modbus. It is used over serial communication and makes use of a compact, binary representation of the data for protocol communication.
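To make "compact, binary representation" concrete, here is a sketch of how a Modbus RTU request frame is assembled, including the standard CRC-16/Modbus checksum appended to every frame. The particular slave address and register values are just example choices:

```python
def modbus_crc16(frame: bytes) -> bytes:
    """CRC-16/Modbus: init 0xFFFF, reflected polynomial 0xA001,
    appended to the frame low byte first."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc.to_bytes(2, "little")

# Read Holding Registers (function 0x03): slave 1, start register 0, count 2
request = bytes([0x01, 0x03, 0x00, 0x00, 0x00, 0x02])
full_frame = request + modbus_crc16(request)
print(full_frame.hex())  # 010300000002c40b
```

The whole request is 8 bytes on the wire, which is what makes RTU so much more compact than the ASCII variant, where each byte is sent as two hex characters.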

Modbus ASCII (American Standard Code for Information Interchange)

This is the type of Modbus that is used in serial communication and makes use of ASCII characters for protocol communication.

The ASCII format uses a longitudinal redundancy check (LRC) checksum. Modbus ASCII messages are framed by a leading colon (':') and a trailing newline (CR/LF).

Modbus TCP/IP or Modbus TCP

This is the type of Modbus protocol that is used for communications over TCP/IP networks.

The Modbus data is wrapped in TCP/IP packets and transmitted over standard Ethernet networks.

Modbus Plus (Modbus+ or MB+)

Modbus Plus is a peer-to-peer protocol that runs at 1 Mbit/s. The Modbus Plus specification covers both the software layer and the hardware layer, and it remains proprietary to Schneider Electric.

Modbus RTU

This is the most commonly used type of Modbus in industrial automation; let us answer a few questions about this type of Modbus.

What is a Modbus RTU?

Modbus RTU is an open serial protocol derived from the master/slave architecture (now client/server) originally developed by Modicon (now Schneider Electric). It is a widely accepted protocol due to its ease of use and reliability.

How many slaves can be connected in Modbus RTU?

Modbus RTU will support up to 247 slaves from addresses 1 to 247 – address 0 is reserved for broadcast messages.

What is the difference between Modbus RTU and Modbus TCP?

The main difference between MODBUS RTU and MODBUS TCP/IP is that MODBUS TCP/IP runs on an Ethernet physical layer, and Modbus RTU is a serial protocol.

Is Modbus RTU serial?

Yes, Modbus RTU is an open, serial (RS-232/422/485) protocol derived from the Master/Slave architecture.

What is Modbus RTU speed?

The majority of Modbus RTU devices only support speeds up to 38400 bits per second.

Modbus TCP IP

What is Modbus TCP/IP?

Modbus TCP/IP is simply the Modbus RTU protocol with a TCP interface that runs on Ethernet.

The Modbus messaging structure is the application protocol that defines the rules for organizing and interpreting the data independent of the data transmission medium.

What is the difference between Ethernet and Modbus TCP/IP?

The main difference between Ethernet and Modbus TCP/IP is that Modbus TCP/IP combines a physical network (Ethernet) with a networking standard (TCP/IP) and a standard method of representing data (Modbus as the application protocol).

Essentially, the Modbus TCP/IP message is simply a Modbus communication encapsulated in an Ethernet TCP/IP wrapper.
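As an illustrative sketch of that encapsulation, the code below prepends the standard 7-byte MBAP header (transaction ID, protocol ID of 0, remaining length, unit ID) to a Modbus PDU. The example IDs and register values are my own choices; note that no CRC is appended, because TCP itself provides error checking:

```python
import struct

def build_modbus_tcp_adu(transaction_id: int, unit_id: int, pdu: bytes) -> bytes:
    """Wrap a Modbus PDU in the MBAP header used by Modbus TCP/IP.

    Header fields (big-endian): transaction ID (2 bytes), protocol ID
    (2 bytes, always 0 for Modbus), length (2 bytes, unit ID + PDU),
    unit ID (1 byte).
    """
    length = len(pdu) + 1  # unit ID byte plus the PDU
    header = struct.pack(">HHHB", transaction_id, 0, length, unit_id)
    return header + pdu

# Same Read Holding Registers PDU as in RTU: function 0x03, start 0, count 2
pdu = bytes([0x03, 0x00, 0x00, 0x00, 0x02])
adu = build_modbus_tcp_adu(1, 1, pdu)
print(adu.hex())  # 000100000006010300000002
```

The PDU in the middle is byte-for-byte the same as the RTU version; only the wrapper changes, which is exactly the "Modbus communication encapsulated in an Ethernet TCP/IP wrapper" idea described above.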

How to troubleshoot Modbus communication failure?

Troubleshooting a Modbus communication failure can be the hardest kind of troubleshooting, because it means no activity is being recognized between the slave and the master.

Basic checks for a no-response-from-slave error:

Check that communication settings parameters are correct

This is the most common error I have found in Modbus communication. You need to set the same baud rate on both the master and the slave; also double-check that the selected protocol is Modbus (most field devices can communicate via several different protocols).

Check the slave's address

If you have more than one field device, you need to assign each one a different address. Most field devices come with a default address of 1; if you do not change it, you will have a duplicate-address problem, which causes a communication error.

Also check the controller side: the number of addresses configured on the datalogger should equal the number of field devices connected.

Check Modbus wiring

Just to be sure, check your wiring: make sure there are no loose cables or open circuits, and that the cable run is shorter than 2000 ft (about 610 meters).

Avoid using T-taps; if you have more than one field device, daisy-chain them.

Check for reversed polarity on RS485 lines

Wiring Modbus devices is simple: they have two terminals, A and B; just wire A to A and B to B. But sometimes manufacturers use different terminology (some use TX and RX). If you are uncertain, try swapping them.

Conclusion

That is it. In this post, we defined what Modbus is and how it works, and we answered a few common questions about the Modbus communication protocol.

If you have questions, please feel free to let us know, and we will answer them as soon as we can.