A new alternative using QFNs with open thermal via-in-pad (VIP) structures reduces cost and eliminates solder wicking.
Bottom termination component (BTC) packages were first offered more than 10 years ago. Since then, use has grown steadily, with a significant increase in demand observed over the past three years. BTCs are most commonly offered as quad flat no-lead (QFN) packages (FIGURE 1). With many benefits and successful market penetration, most packaging houses now provide QFNs, albeit under different names depending on final package format, including MLF, MLPD, MLPM, MLPQ, VQFN, and DFN.
ibm1
FIGURE 1. Standard QFN device bottom view.
QFN packages are used to meet a variety of voltage/power regulation, logic controller, and clocking needs. The small form factor is attractive to designers looking to increase functionality using less PCB real estate. A good example is shown in FIGURE 2. Many voltage regulator designs have migrated from using a daughtercard sub-component soldered to the PCB to performing regulation directly on the main card assembly with the use of a QFN, also known as down-regulation or “down-reg.” Benefits of moving to this new layout using a QFN package within the circuit include using less PCB real estate, equivalent or increased regulation function, and simplified assembly and rework processes.
ibm2
FIGURE 2. Voltage regulator package migration.
Regardless of application, new QFN packages need to be designed with thermal and power dissipation requirements in mind. Overpowering or overheating a device can lead to internal package failure or downstream device errors.
To help ensure adequate thermal and power dissipation, QFNs are designed to be soldered to a thermal pad located under the device connecting the PCB and component exposed die paddle (Figure 1). Within the thermal pad area, thermal vias are connected to sub-surface power/ground layers completing the heat/power sink structure for the device. During operation, the QFN generates heat. FIGURE 3 shows various heat flux paths, including radiation from the package body, conduction through the bulk PCB, and conduction through thermal vias. By design, the majority of heat is intended to be transferred using via-in-pad (VIP) structures.
ibm3
FIGURE 3. Heat transfer using PCB thermal pad.1
The resulting thermal VIP structure is very efficient. It provides greater design flexibility and low thermal resistance in a standard size device package. Superior electrical and SI characteristics are obtained due to minimal lead lengths, reducing electrical path distances between the silicon die and PCB.
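The low thermal resistance of the VIP structure can be illustrated with a first-order conduction estimate: each plated via barrel contributes R = L/(kA), and an array of vias conducts in parallel. The sketch below is illustrative only; the copper plating thickness and example dimensions are assumed, not taken from any specific design.

```python
import math

K_CU = 390.0  # W/(m*K), thermal conductivity of copper (typical value)

def via_thermal_resistance(board_thick_m, fhs_m, plating_m=25e-6):
    """Conductive resistance of one plated via barrel: R = L / (k * A),
    where A is the cross-section of the copper annulus (hole wall)."""
    r_inner = fhs_m / 2                  # finished hole radius
    r_outer = r_inner + plating_m        # outer radius of plated wall
    area = math.pi * (r_outer**2 - r_inner**2)
    return board_thick_m / (K_CU * area)

def array_resistance(n_vias, board_thick_m, fhs_m):
    """n identical vias conduct heat in parallel: R_total = R_single / n."""
    return via_thermal_resistance(board_thick_m, fhs_m) / n_vias

# Example: nine 12 mil (0.3 mm) FHS vias through a 1.6 mm (0.062") board
r9 = array_resistance(9, 1.6e-3, 0.3e-3)
print(f"{r9:.1f} K/W")
```

The parallel-array term is why via count matters so much in Figure 12: each added via divides the barrel resistance, with diminishing returns once spreading resistance in the pad dominates.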
While QFNs have been incorporated into consumer and mobile electronics for some time, these packages are now making their way into enterprise server and storage products. Original QFN VIP design points suitable for consumer grade electronics may not be suitable for higher-complexity, high-reliability products.
PCB design guidance for QFN thermal pads, thermal vias and I/O can be found in IPC-7093, Design and Assembly Process Implementation for Bottom Termination Components, issued March 2011.2 Component supplier guidance is also available to help ensure designs meet supplier requirements for optimal package operation.1,3,4,5,6,7,8,9,10,11
Upon review of the design guidance documentation1-11, two primary design points dominate. QFN thermal pads are recommended to have thermal vias either filled or left as open copper through-hole vias. Filled vias are achieved either by via tenting or using VIPPO technology. Via tenting covers thermal through-hole vias using conventional solder mask (top or bottom). Via-in-pad plated over (VIPPO) technology fills thermal vias with conductive epoxy, then caps them with copper (FIGURE 4).
ibm4
FIGURE 4. VIPPO basic structure.12
While via tenting and VIPPO techniques may be suitable for some applications13,14,15,16, they are not necessarily acceptable for others. Tenting vias with solder mask has been known to increase the risk of long-term PCB reliability issues such as insulation resistance failure, plus other mechanisms. Additionally, VIPPO can add 15 to 20% to the PCB fabrication cost, and suppliers of high-quality PCBs are limited. VIPPO application is also limited by PCB thickness, generally ranging from 0.040" to 0.110" stackups. VIPPO use beyond this range has not been widely assessed for long-term reliability.
Open copper through-hole thermal vias are unprotected solderable vias found within the thermal pad area under the QFN (FIGURE 5). In this case, solder paste deposits are printed on the PCB thermal pad, avoiding through-hole via locations. Upon reflow, printed deposits will flow, outgas and connect with other nearby deposits. The intent with this design point is to create a PCB thermal pad structure that is completely soldered to the exposed QFN die paddle, maximizing the thermal connection area under the package.
ibm5
FIGURE 5. Open thermal through-hole via structure.1,12
In practice, numerous issues have been identified using this approach, including:
  • Solder wicking down thermal vias
  • Component standoff variation (tilting, floating)
  • Increased thermal pad voiding levels
  • Backside solder protrusions exiting thermal vias.
If thermal vias fill with solder during reflow, this can limit QFN population to a single-sided printed PCBA, increase rework difficulty, promote additional operator touch-up operations, and can leave dangerous solder shards that may become a shorting risk.
An alternate thermal pad design option is discussed, focusing on next-generation QFNs with package body sizes ranging from 1 to 4mm2. The new design point is independent of board thickness and can be used on PCB stackups ranging from 0.040" to 0.250".
The work outlined here includes component symbol changes to 124 unique part numbers, nearly 300 placements, and multiple QFN component suppliers across multiple server and storage class hardware systems.
Intent and Objectives
The intent of this work is to offer a new design point option using QFN packages beyond what is currently recommended by IPC-70932 and component supplier guidelines.1,3,4,5,6,7,8,9,10,11
There were five objectives:
  1. Provide new design guidelines for an alternate QFN VIP option using standard PCB through-hole via and solder mask technologies in combination with conventional SMT solder stencil technology.
  2. Enable a solution that can be used across a wide variety of BTC package types, PCB stackups, and assembly/rework process windows.
  3. Enable high-quality and -reliability QFN performance integrating assembly, thermal, power and signal integrity specification requirements.
  4. Enable a repeatable automated hot gas rework process, minimizing the need for subsequent operator touch-up using additional flux and hand soldering iron.
  5. Provide a cost-effective alternative to VIPPO thermal via design points.
As described by T. Adams et al17 and IPC-7093,2 there are a number of parameter inputs to consider when optimizing a QFN design:
  • Thermal pad and I/O dimensions
  • Thermal vias (quantity, size, pitch, location, and type)
  • Solder mask coverage (thermal pad and I/O)
  • SMT solder stencil apertures (A/R and solder volume).
Monitored output responses include:
  • Component standoff (reliability)
  • Thermal pad % coverage (thermal/power dissipation)
  • Solder voiding levels (thermal pad and I/O)
  • Solder wicking down thermal vias
  • I/O opens/shorts.
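The stencil aperture A/R input listed above is commonly checked against the familiar paste-release guideline (area ratio of roughly 0.66 or greater, per IPC-7525). A minimal sketch of that check follows; the transfer efficiency value is an assumption for illustration, not a measured figure.

```python
def area_ratio(w, l, t):
    """Stencil aperture area ratio: opening area / aperture wall area.
    Values >= ~0.66 generally release paste well (IPC-7525 guideline)."""
    return (w * l) / (2 * (w + l) * t)

def paste_volume(w, l, t, transfer_eff=0.85):
    """Printed deposit volume, assuming a nominal transfer efficiency."""
    return w * l * t * transfer_eff

# Example: 1.0 x 1.0 mm thermal pad window aperture, 0.127 mm (5 mil) foil
ar = area_ratio(1.0, 1.0, 0.127)
print(f"area ratio = {ar:.2f}")  # ~1.97, comfortably above 0.66
```

Small SMD windows shrink the numerator faster than the denominator, which is why shrinking pad geometries push aperture ratios toward the release limit.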
The challenge with QFN printed circuit design is balancing assembly/rework, power, thermal and SI requirements to sufficiently dissipate heat and electrical current while ensuring the device is easily manufacturable.
The first step in any design is to review and understand the component supplier’s specifications and guidelines for a particular device. Requirements for via quantity, thermal duty, operational power, current requirements and signal integrity will be specified.
To date, five primary design options have been recommended by IPC-7093 and component supplier guides. FIGURE 6 illustrates each option using a nine thermal via layout. TABLE 1 lists associated pros/cons for each option. In addition, a new option (#6) is provided resulting from the work within this study. This new option is considered a progression of learning, utilizing the best practices used across options 1-5, and extending concepts for next-generation designs, where scale continues to shrink.
ibm6
FIGURE 6. QFN thermal via design options.
TABLE 1. Thermal Via Option Pros and Cons
ibmTable1
Current Industry Practice
Design options 1, 2 and 4 shown in TABLE 1 have been widely implemented2,13,14,15,17,19,20,21 over the past 10 years with varying degrees of success.
With regard to Options 2 and 4 above, IPC-70932 recommends plugging thermal vias (Section 6.1.3.5): “It is important to plug the via to avoid any solder wicking inside the via during the soldering process.” For enterprise server and storage applications targeted within the scope of this study, via plugging in the form of solder mask tenting is not permitted. Concerns with long-term PCB reliability remain an issue. VIPPO-based designs, while helpful in eliminating solder wicking down vias and enabling larger process windows, are expensive and are not fully tested on thick PCB stackups >0.160". Questions with long-term VIPPO barrel pad stack reliability remain.
Open copper thermal pad/via designs (Option 1) are most commonly used, and were the starting point for this study. Solder wicking variability (FIGURE 7) was shown to be the most significant issue using this design point.
ibm7
FIGURE 7. Thermal via solder wicking variability.1
In some cases devices soldered to open copper thermal via structures worked very well, with minimal solder wicking, low thermal pad voiding, and minimal back-side via solder protrusions. In other cases devices soldered using the exact same approach were not acceptable. Reducing part-to-part variability across a variety of PCB stackups by controlling the design point was the key lesson learned during early trials using open copper thermal pads.
Several key observations using this approach were noted during early study and are described below. FIGURE 8 shows a sample five-via open copper thermal pad/via structure and the associated SMT solder stencil print layout that was evaluated. The first area of concern was solder intended to connect the PCB thermal pad and device die paddle wicking down thermal vias (FIGURE 9). Robbing the thermal pad of solder can lead to intermittent grounding and device failure, increased voiding, lower effective % coverage, and lower standoff/reliability for the device.
ibm8
FIGURE 8. Starting design point: open Cu thermal pad.
ibm9
FIGURE 9. Solder in thermal vias.
Depending on the PCB stackup thickness, back-side solder protrusions were observed (FIGURE 10). The protrusions were found to be a function of the thermal pad size and PCB stackup. The larger the thermal pad and thinner the PCB, the more protrusions observed. Such protrusions can lead to back-side assembly/rework issues, signal shorting, power/ground shorting, and can introduce conductive solder shards to the system, should a shard break free from a via annular ring.
ibm10
FIGURE 10. Solder protrusions.
Inadequate thermal via quantity was another significant observation with early design reviews. If there are not enough vias included within the thermal pad area (FIGURE 11), heat transfer into sub-surface ground layers will be limited and may result in the device overheating (and possibly failing) during operation. Low via counts can increase electrical impedance to ground, affecting device power and SI performance.
ibm11
FIGURE 11. Inadequate via quantity.
IPC-7093 (FIGURE 12) includes guidance on the number of thermal vias to include within a thermal pad to sufficiently transfer heat from a device into the PCB. The figure summarizes work completed using 12mil FHS (finished hole size) thermal vias, a variety of via patterns, using a large 9 x 9mm package – examining via counts and effective heat transfer. All vias are connected directly to ground/power planes with no thermal relief structures present.
ibm12
FIGURE 12. IPC-7093-6-12 via count thermal effects.2
As the results from the legacy March 2011 study show, use of nine thermal vias offered optimal heat transfer efficiency. While heat transfer improvements may only be marginal when adding more than nine vias, keep in mind additional vias may be required for other reasons, including power dissipation and SI needs.
The next observation relates to I/O pins on QFN devices. Voltage regulation is a common application for this device type; therefore, circuit designs often integrate surface power and ground shapes, as shown in FIGURE 13. If the component symbol is not designed to include solder-mask-defined (SMD) I/O pins, ganged opening areas will occur, as denoted by the arrows in Figure 13. Since there is no solder mask in these ganged openings, adjacent I/O solder joints have been shown to flow and bridge together during reflow. Although this has minimal power/ground electrical impact, it is considered an IPC-A-61023 defect per Section 5.2.7.2. These bridged solder joints can in turn reduce overall second-level interconnect reliability. It is therefore recommended that SMD I/O copper pad geometries be used.
ibm13
FIGURE 13. Ganged solder mask I/O openings.
The fifth observation relates to thermal via placement. FIGURE 14 shows an example layout using 10 vias. The image on the left shows top side thermal via placement only. At first glance all locations appear random. When copper etch layers are revealed (right image), however, it can be seen that six of the vias were located for close proximity wiring of nearby I/O grounding pins. The remaining four vias were not wire-routed and were connected only to sub-surface power/ground layers. They were placed in random locations within the thermal pad area.
ibm14
FIGURE 14. Random via locations.
Random via placement, as illustrated, relies on via plugging methods (tenting or VIPPO). If either of these methods is unacceptable for a particular application, as is the case within this work, then solder stencil aperture design is made much more difficult. Balancing thermal pad % coverage and minimum stencil aperture ratios, while avoiding open thermal vias, is extremely challenging and may not be possible in some cases. Non-symmetric solder deposits on the thermal pad can lead to other problems as well. Components have been observed to skew, float, and/or tilt, causing I/O shorts and opens, reducing overall first pass assembly yields. It is therefore recommended thermal via placement follow standard x/y grid arrays and avoid random placement.
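The recommendation to follow standard x/y grid arrays can be captured in a small helper that centers a symmetric via grid on the thermal pad, which in turn keeps solder deposits symmetric and reduces tilt/skew risk. This is an illustrative sketch; the pad size and via pitch in the example are arbitrary.

```python
def grid_via_locations(pad_w, pad_h, pitch):
    """Symmetric x/y via grid centered on the thermal pad.
    Returns (x, y) offsets from pad center, in the same units as inputs."""
    def axis(span):
        n = int(span // pitch) + 1          # rows/columns that fit the span
        start = -(n - 1) * pitch / 2.0      # shift so the grid is centered
        return [start + i * pitch for i in range(n)]
    return [(x, y) for y in axis(pad_h) for x in axis(pad_w)]

# Example: 2.0 x 2.0 mm pad, 0.8 mm via pitch -> centered 3 x 3 grid
locs = grid_via_locations(2.0, 2.0, 0.8)
print(len(locs))  # 9
```

A centered grid also makes stencil aperture design tractable: every window between vias has the same geometry, so one aperture ratio check covers the whole pad.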
Alternate SMT stencil printing methods also have been observed. “Zebra printing” patterns as shown in FIGURE 15 have been used with the intent to avoid printing solder down vias, permit proper outgassing channels to reduce voiding, and ensure adequate thermal pad solder coverage.
ibm15
FIGURE 15. “Zebra printing.”
Unfortunately, constructions using this approach have not been very effective or well controlled. Minimum % solder coverage violations (<50%), increased stencil aperture clogging/reduced throughput, and via solder wicking have all been reported. As a result, alternate stencil printing methods such as Zebra printing patterns are not recommended as a means of reducing part-to-part thermal pad print variability.
New Design Point Considerations
Based on extensive learning using open copper thermal pad/via constructions, a new design point option was developed with the intent of improving or eliminating as many of the issues highlighted in the earlier study as possible. The new approach incorporates solder-mask-defined windows within the thermal pad area and I/O leads, as illustrated in FIGURE 16.
ibm16
FIGURE 16. SMD window design option.
There were multiple goals with the new approach, including:
  • Utilize low-cost open through-hole via structures
  • Eliminate solder wicking down thermal vias
  • Ensure proper via counts to manage heat/power
  • Maximize thermal pad % coverage with solder
  • Reduce standoff variability, improving reliability22
  • Provide proper ground return paths, ensuring long-term electrically stable system operation
  • Enable safe, repeatable rework process windows.
Design rules shown within TABLE 2 apply to BTCs in the form of QFN, MLF, MLPD, MLPM, MLPQ, VQFN, and DFN, as well as FETs and MOSFETs. Guidance is provided to enable lead-free RoHS-compliant constructions, offering high-quality, high-reliability enterprise server and storage class products.
TABLE 2. SMD Window Design Parameter Ranges
ibmTable2
As described in Table 1, numerous benefits are associated with this approach. Combining conventional through-hole vias with custom solder mask windows within the thermal pad area is the essence of the design. This simple approach not only has technical benefits, but commercial procurement benefits as well. Integrating qualified through-hole via and solder mask technologies enables more PCB suppliers to fabricate cards with this design, which in turn helps spread demand over a wider supply base, and helps lower the overall cost of the solution.
The use of SMD windows helps reduce part-to-part variation in multiple ways. Since solder cannot travel down vias, thermal pad standoff (post-reflow) is more consistent; voiding levels are reduced, since solder is not robbed from the thermal pad; and solder-mask-defined outgassing channels are embedded. Symmetrical solder pad print layouts minimize component tilting or skewing during reflow, reducing the risk of I/O shorts and opens. The result is more effective thermal and power management of the device, with a high level of thermo-mechanical reliability.
Another benefit to this approach is the design’s wide application window. Using 8/10/12mil FHS through-hole vias with solder mask can enable SMD window designs spanning PCB thicknesses of 0.040" to 0.250". Via plugging options are aspect-ratio-dependent and cannot offer this range.
The design also enables a safe and high-quality rework solution. With solder not able to travel down vias, component removal and site redress operations are simplified. Risk of backside solder protrusions during defective part removal is eliminated, reducing the need for dangerous operator hand-iron touch-up actions. Use of standard 6mil solder mask webs helps ensure solder mask peeling does not occur during site redress.
Results and Discussion
Common SMD window layouts are generally defined as having via counts ≤9. Examples of some common layouts are shown in FIGURE 17.
ibm17
FIGURE 17. Common SMD window layouts (not to scale).
The same SMD window approach can be used for more complex layouts as well. These are defined as via counts ranging from five to 31 (or greater) within the thermal pad area. Examples of some more complex layouts are shown in FIGURE 18.
ibm18A
ibm18B
FIGURE 18. Complex SMD window layouts (not to scale).
As with any design point, key parameters must be well understood and controlled to enable ease of manufacturing/reworkability and to achieve high-quality, reliable device operation over the life of the system. As such, the following sections discuss key challenges to manage when implementing new SMD window designs.
Qualified via sizes and minimum pitch. With the migration to elevated lead-free processing windows, it is critical qualified laminate materials be used in combination with qualified via sizes and pitches. Design layouts will change depending on the application PCB stackup, determining what via/pitch options can be safely and effectively used.
SMD window layouts can be designed using 8mil FHS vias on 0.8mm (32mil) via pitch and 6mil solder mask webs for card stackups ranging from 0.040" to 0.160". If thicker PCB applications are necessary (0.160" to 0.250"), the use of 10mil or 12mil FHS vias may be required.
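The stackup-dependent via sizing described above amounts to a simple selection rule: thicker boards need larger finished hole sizes to keep the drilled-and-plated aspect ratio within qualified limits. The sketch below encodes only the ranges stated in this article; the choice between 10 and 12 mil on thick stackups would still depend on the fabricator's qualified capability.

```python
def select_thermal_via_fhs(board_thickness_in):
    """Pick a finished hole size (FHS) per the stackup ranges used in
    this work. Thicker boards need larger drills to keep the plating
    aspect ratio within qualified limits."""
    if board_thickness_in < 0.040:
        raise ValueError('below the 0.040" range covered by this design point')
    if board_thickness_in <= 0.160:
        return 0.008   # 8 mil FHS, 0.8 mm via pitch, 6 mil mask webs
    if board_thickness_in <= 0.250:
        return 0.010   # 10 mil (or 12 mil) FHS for thick stackups
    raise ValueError('above the 0.250" range covered by this design point')

print(select_thermal_via_fhs(0.093))  # 0.008
print(select_thermal_via_fhs(0.200))  # 0.01
```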
Be aware that using orthogonal via arrays introduces signal wiring limitations. If nearby wiring densities are high, sub-surface routing under BTC thermal pad via arrays will be very limited. The majority of routing will need to occur outside the device keep-out area. Wireability is yet another factor to consider when selecting via counts during the design phase.
It is important to balance the thermal via quantities required by the supplier with thermal pad solder % coverage, while ensuring solder does not wick down vias. As thermal via counts increase and thermal pad areas decrease, the available % solder coverage connecting the device to the PCB is reduced. Stencil aperture ratios (A/Rs) must be closely monitored to ensure consistent solder deposits are printed. In some cases where the via quantity cannot accommodate minimum % coverage requirements within the thermal pad, some vias may need to be placed outside the component outline as “outriggers,” shown in FIGURE 19. Although this may not be the best thermal solution (increased conduction path), vias may still be required for power and SI management reasons.
ibm19
FIGURE 19. “Outrigger” thermal vias.
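The via count versus % coverage trade-off can be budgeted geometrically: subtract the mask keepouts around each open via and the mask web area from the gross pad area. The sketch below is a rough model, with all dimensions assumed for illustration; it ignores second-order effects such as web intersections and voiding.

```python
import math

def thermal_pad_coverage(pad_w, pad_h, n_vias, via_keepout_dia,
                         web_len, web_w):
    """Rough % of thermal pad left solderable after subtracting
    solder-mask-defined via keepouts and webs (all dims in mm).
    via_keepout_dia: mask dam diameter around each open via;
    web_len/web_w: total length and width of mask webs dividing the pad."""
    pad_area = pad_w * pad_h
    keepout_area = n_vias * math.pi * (via_keepout_dia / 2) ** 2
    web_area = web_len * web_w
    return 100.0 * (pad_area - keepout_area - web_area) / pad_area

# Example: 4 x 4 mm pad, nine vias with 0.6 mm mask keepouts,
# and ~16 mm total of 6 mil (0.152 mm) webs
cov = thermal_pad_coverage(4.0, 4.0, 9, 0.6, 16.0, 0.152)
print(f"{cov:.0f}%")
```

With these assumed numbers the estimate lands near the 63 to 74% coverage range reported for the implemented designs in Figure 25.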
Solder mask alterations. A key learning from implementation efforts is to ensure PCB suppliers do not modify the solder mask web design specified in the original design file to accommodate internal process capabilities. IPC-70932 permits the use of 3mil solder mask webs; however, not all PCB fabricators have this capability. The 6mil solder mask web used in this design point is considered standard technology, so there is no reason for PCB suppliers to make additional modifications. FIGURE 20 shows an open copper thermal pad structure resulting from solder mask removal by the supplier. It is therefore important to review and verify actual constructions produced by the PCB supplier to ensure the desired thermal pad structures are formed.
ibm20
FIGURE 20. PCB supplier removal of solder mask web.
SMT stencil alterations. PCB copper shapes should be designed with stencil aperture ratios (A/Rs) in mind, as outlined in Table 2. SMT stencil aperture openings should be designed 1:1 with copper thermal pad shapes and I/O pads. FIGURE 21 shows an example where significant SMT stencil aperture modifications were made by a contract manufacturer. The resulting thermal pad print deposits do not match the copper windows. Low % coverage, solder wicking down vias, signal opens, and increased risk of intermittent ground failures have been reported with such drastic modifications.
ibm21
FIGURE 21. Solder deposits not 1:1 with Cu pad.
In some cases, aperture reductions from 1:1 Cu geometries may be required by the assembler to help with printing registration, solder slump, or other assembly line specific needs. As shown in FIGURE 22, such aperture reductions are acceptable to meet manufacturer capabilities, but need to be optimized and verified accordingly. Gray areas shown in the figure are solder print deposit areas; orange areas are PCB copper pad areas.
ibm22
FIGURE 22. 1:1 stencil aperture reductions.
During early manufacturing stages, it is therefore important to verify that SMT stencil design windows match PCB copper thermal pads and I/O. If 1:1 reductions are required by the assembler, optimization and verification are required.
Thermal pad voiding. IPC-70932 Section 6.1.5.3 gives the following guidance for voiding within thermal pads: “The presence of small voids in the thermal pad region is not likely to result in degradation of thermal and electrical performance, nor impact the reliability of perimeter I/O solder joints.” Based on this and published component supplier guidance, the goal with the SMD window design was to minimize large coalesced voids and target 30% maximum voiding as measured by cross-sectional area. Note the current limit set by IPC-7093 is 50%.
As reported in numerous studies14,15,16,17,24,25,26, several factors affect voiding, including the number of thermal vias, size of thermal pads, and outgassing channel allowance. The SMD window approach helps minimize voiding levels by incorporating solder-mask-defined outgassing channels, combining numerous small thermal pads instead of one large opening, and does not permit solder to be printed down vias. These features work together to manage voiding to within acceptable levels. The design also helps minimize violent outgassing that can lead to excessive solder balling, leaving solder shards behind, increasing shorting risks. FIGURE 23 shows sample voiding levels when using this approach. The figure shows voiding levels are within acceptable limits, but further optimization may be required to reduce levels even further. Reduction of voids helps increase effective % coverage of the thermal pad connection and should be a continued focus item to refine the SMD window approach.
ibm23
FIGURE 23. Thermal pad voiding examples.
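Voiding measured by cross-sectional area reduces to a pixel count once an X-ray image has been thresholded into solder and void regions. The toy sketch below illustrates the arithmetic; the 4 x 5 "image" is fabricated for illustration, not real inspection data.

```python
def void_percentage(xray_mask):
    """Voiding % by cross-sectional area from a thresholded X-ray image:
    rows of 0 (solder) and 1 (void) covering the thermal pad joint."""
    total = sum(len(row) for row in xray_mask)
    voids = sum(sum(row) for row in xray_mask)
    return 100.0 * voids / total

# Toy 4 x 5 "image": 3 void pixels out of 20 -> 15%
mask = [
    [0, 0, 1, 0, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 1, 0],
]
pct = void_percentage(mask)
print(f"{pct:.0f}% voiding")  # 15% -- under the 30% target
```

In practice the thresholding step (separating void from solder in grayscale X-ray data) dominates measurement accuracy; the area arithmetic itself is trivial, as shown.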
Implementation status. To date, the SMD window design approach has been applied to over 124 unique physical symbols, affecting over 296 unique part number placements spanning a wide variety of enterprise server and storage systems. Component function includes power regulation, logic controller, and clocking devices.
FIGURE 24 shows the pareto distribution of usage for all part numbers implemented. While a few devices are used across multiple card designs, the pareto shows the variety of different components on the market and their niche uses within circuit designs. This clearly illustrates the significant increase in adoption rates of these device types within server and storage class hardware.
ibm24
FIGURE 24. VIP implementation pareto summary.
FIGURE 25 shows resulting via count usage and % coverage obtained on SMD window implemented devices. The majority of vias used across all designs ranged from four to nine, with some applications requiring as high as 31. Thermal pad % coverage for all devices ranged from 63 to 74%.
ibm25
FIGURE 25. Via count ranges and resulting % coverage.
Interconnect reliability testing performance. At the time of publication, reliability testing of the design point is still in progress. Test results obtained to date are encouraging, suggesting the design point offers a reliable solution over the longer term. Results are shown in TABLE 3.
TABLE 3. SMD Window Reliability Test Results to Date
ibmTable3
Summary
While IPC-7093 and component supplier guidance documents continue to provide valuable guidance on how best to design and manufacture card assemblies using BTCs, there has been significant growth in BTC usage in more complex constructions over the past three years.
A greater variety of device packages are being introduced into higher-complexity, high-reliability server and storage class hardware using thermal pad structures. Package sizes continue to shrink; many are now less than 3mm2 (well below the 7mm2 data found within IPC-7093). Component placement counts are increasing significantly; in some cases, 10 to 20X the placement density of legacy product designs has been reported. Increased device functionality continually draws more power and produces more heat that must be dissipated.
With all of these factors in mind, a new SMD window design option was established. The design builds on best practices from IPC-7093, supplier design guidance documentation, and legacy industry literature. Extension into smaller form factor BTC devices with higher placement densities and greater thermal/power dissipation needs were key drivers for this design point.
Improving ease of manufacturability for primary attach and rework processes was yet another motivator. The majority of industry and supplier guidance continues to focus on primary attachment quality and reliability. The SMD window design addresses both primary attach and rework needs. It enables safe and repeatable rework process capability, with many benefits over other design options.
In summary, an alternate BTC design approach has been implemented. Key benefits include:
  • Device thermal/power dissipation requirements met
  • High quality/device reliability
  • Improved manufacturability
  • Rework consistency and safety
  • Low cost, enabling larger PCB supplier base
0

添加评论

  1. With fast paced technological advancements of the 21st century, the demand for electronics today is higher than ever. The supply now needs to meet the demand without failing to deliver quality. This is why older ways of outsourcing different production materials from various sources is no longer valid in today’s times, as sub contractors are unable to coordinate accordingly, which leads to flailing services and poor customer satisfaction rates.
    This is why; it just makes sense to keep all the business under one roof.
    Opting for an all-in-one-source contract electronic assembly firm will result in better assembled products that are delivered on time and in a cost effective manner. PC board is amongst the most essential components in any electronic device, so it is wise to ensure that your firm is getting duly assembled PC boards at competitive prices. Here are some ways in which an all in one contract is better than the other options available.

    Surface-Mount Board Modifications

    If the manufacturing firm that you have hired makes use of innovative Surface Mount Technologies, then rest assured it will give way to vast improvements in your board design, offeringnoteworthy cost-cuttingoptions. SMT-based parts are much smaller than their alternatives.They’re alsoless expensive. So you can create and assemble a fully functioning device with almost half the cost with SMT based parts. Furthermore, SMT designs allow you to exploit either side of a circuit board as well.

    Component Sourcing Options

    Part production and manufacturing firms that have been in the business for a long timepossess a wide gamut of component manufacturers to make the choice easier for you. More often than not, minor differences in component sources and trivial changes to the build can result in significantly diminished costs per-unit, with little or no decline in net product quality. Therefore, it is wise to search for alternative component sourcing options when you are entering the phase of device assembly.

    The Science behind Prototyping

    If your contract electronic assembly service enables you to view prototypes more quickly and at lower costs as compared to employing a third-party for prototyping, it means that you are saving big on this aspect. It can also create extra feedback loops, fine tuned designs, and a guaranteed final build, based on the pre-selected prototype. This gives you flexibility in design and operation, and you are able to control wider facets of your device assembly.
    Are you looking for an all-In-One Contract Electronic Assembly in China? If yes, then Asia Pacific Circuits is a one-stop shop where you can get all your PCB assembling needs addressed. From quick turn PCB assembly to other related services, we do it all. Get an instant quote by clickinghere and experience the best PCB services in China by reducing your budget costs.
    9

    查看评论

  2. Via-In-Pad (VIP) is rapidly becoming more commonly used in modern printed circuit design due to many considerations, including the need to miniaturize the PCB form factor. This review of via-in-pad technology can help remove some of the mystery of VIP in PCB manufacturing.
    Via-In-Pad Structure Types
    Pad diameter is the major factor in the device footprint that will determine what type of VIP structure is utilized; drilled-and-filled or laser microvia. In order to meet the minimum annular ring requirement of IPCClass 2 or Class 3 there must be sufficient pad size to accommodate the via diameter and allow for manufacturing tolerances.
    Mechanically Drilled/ Epoxy Filled Vias
    The available range of finished hole sizes for epoxy filled vias (mechanically drilled) is a minimum 0.008″ through a maximum of 0.018″. To consider the minimum finished hole size (FHS) of 0.008″ you must consider the pilot drill diameter (drill size before plating). A pilot drill diameter for 0.008″ FHS will be 0.010″, which will now determine the minimum pad size as defined by minimum annular ring.
    Laser Drilled Microvias
    Microvias require as little as 0.002″ annular ring (laser via diameter + 0.004″). Laser microvias have the advantage not only of being smaller in diameter than mechanical drills (0.003″ to 0.006″ typical diameter for PCB designs), but also of registering much better: the process assures alignment to the sub-layer, and the overall hole pattern scales to match the sub-layer image in the X and Y dimensions.
    BGA Requirements
    With BGAs, footprints of 0.5mm pitch and less require microvias, as the pad diameter is not large enough to accommodate mechanical drills. Microvias most commonly span a single dielectric thickness: ideally half as deep as the diameter (0.5:1 aspect ratio), with a maximum depth equal to the diameter (1:1 aspect ratio). This is because the copper plating process takes a substantial amount of time and is not designed to fill deep blind holes.
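A minimal sketch of the microvia sizing rules above: pad diameter is the via diameter plus 0.004″ (0.002″ annular ring per side), and depth should not exceed the via diameter (1:1 aspect ratio), with 0.5:1 being ideal. Function names are illustrative.

```python
# Microvia sizing checks from the rules above (all dimensions in inches).
def microvia_min_pad(via_diameter):
    # 0.002" annular ring per side -> pad = via diameter + 0.004"
    return via_diameter + 0.004

def microvia_depth_ok(via_diameter, depth):
    # 1:1 maximum aspect ratio; 0.5:1 (depth = half the diameter) is ideal
    return depth <= via_diameter

print(round(microvia_min_pad(0.004), 3))  # 0.008
print(microvia_depth_ok(0.004, 0.006))    # False: exceeds 1:1
```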
    Do you have questions about VIP? Learn more here in our full discussion of the topic. You may also wish to discover more about this and Advanced Circuits’ full range of PCB manufacturing and assembly capabilities. For over 25 years, Advanced Circuits has provided its customers in the high tech aerospace, military (DOD contracts ready), medical, and commercial industries with printed circuit boards that deliver powerful performance, reliability, and precision for critical applications. Advanced Circuits’ over 10,000 customers rely on the highest quality standards they receive for all PCBs, from simple prototypes to complex designs requiring microvias and machining.

  3. Are you thinking of using a rigid/flex format for a PCB? If so, the first and most obvious answer to the question ‘why?’ is to make use of the flexibility and to permit movement between two conventional boards – even if the movement is limited to providing vibration tolerance.
    Flex circuits that actually flex in regular use can, however, be considered a special case; very often, the intent is to provide an efficient interconnect between two boards that do not lie in the same plane. In such cases, the flexible part may only flex during final assembly of the product and never do so again.
    Flex-rigid can be viewed as an alternative interconnect strategy or as cable replacement. One flex section replaces, at a minimum, a cable plus two connectors – which may be worthwhile for bill of materials reasons – and frees the volume the connectors would have occupied. This is significant for such products as wearables, where every mm3 within the enclosure is used, and in systems where two or more PCBs must be folded into place in final assembly. PCB fabricators report that products in which space and weight are at a premium are amongst the fastest growing adopters of flex-rigid technology.
    Other benefits include: improved reliability, with fewer connectors and associated solder joints; and control of the signal path between the circuitry on the boards at either end of the flex.
    Today, the design process is assisted by 3D-capable PCB design software; if you are trying to make maximum use of space, you need to ensure there is no interference between component profiles. Increasingly, design software can also model the flexed state of the non-rigid part to establish parameters such as bend radius, and do so dynamically, checking the entire path of the system in which the two rigid elements move in normal use.
    Mature technology
    Flex-rigid fabrication is a mature technology, with well developed rules for success. The first is to establish and maintain a dialogue with the board fabricator and to ensure that any modifications to design rules or design checks (DRC) are imported into the layout package in use and adhered to.
    Many of the guidelines for a successful outcome of a flex-rigid design flow from the properties of the materials being employed. The base material for the rigid portion is most likely to be FR4, while the flex is typically a polyimide film (often called Kapton, after DuPont’s material) with copper foil applied and a coverlay film imposed in place of the solder mask.
    Some thought is needed about how the material properties match the task and this quickly shows where care should be taken. The copper-to-film bond is a junction between dissimilar materials; the tighter the bend radius, the greater the stresses at the boundary and the higher the risk of delamination. The tracks, although thin, are copper and if the bend radius is too small, repeated flexing carries the risk of stress fracture.
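A common way to bound this risk is a minimum-bend-radius rule of thumb. The multipliers below (6x the flex thickness for a static bend, 10x for repeated flexing) are widely quoted guidelines, not values from this article; confirm the actual rule with your board fabricator.

```python
# Rule-of-thumb minimum bend radius for a flex section.
# Multipliers are assumed guidelines (6x static, 10x dynamic),
# not values from the article; confirm with your fabricator.
def min_bend_radius(flex_thickness_mm, dynamic=False):
    factor = 10 if dynamic else 6
    return flex_thickness_mm * factor

print(round(min_bend_radius(0.2), 2))               # 1.2 mm, static bend
print(round(min_bend_radius(0.2, dynamic=True), 2)) # 2.0 mm, repeated flexing
```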
    The transition between rigid and flex is also an important area; as with cable termination, bending forces should not be imposed at the transition as this can create very small bend radii over a small distance. Again, dynamic modelling of folding in 3D is helpful and the composition of the layer stack-up is key for reliability and manufacturability.
    Many common errors can be avoided by attention to a few key points in the design process. One area is the need for a larger copper ring around any hole drilled in the flex and, as a related point, the need to leave a larger separation between drill hole/pad and adjacent tracks. The flexible film is just that and the effect of that flexibility is to increase normal (inescapable) tolerances. Therefore, if a drilled hole misses concentricity with its pad by a small margin, the hole will still plate and connectivity through the via will be unaffected – but there is a risk of a short to a track passing too close.
    Minimising vias in the flex region is desirable in itself; the cost of vias is higher than in the rigid, assuming the design requires a double sided flex. This is one area where the configuration of the stack-up that makes up the complete PCB is critical. The manufacturing process is in part subtractive – there will be layers over and under the flex in fabrication that are removed selectively to release the flex segment before the process is complete. The stack-up should be configured to ensure the polyimide layer is fully supported through any drilling. Visualising a drill bit attempting to penetrate a thin, deformable film will immediately show why.
    Transition
    Merely taking vias in the flexible area back into the rigid portion – the flex layer runs throughout the total area of the PCB and provides flexibility only where it is exposed – is not sufficient. For manufacturing reasons, the transition from flex to rigid is not at the same point throughout all layers, so any (possibly buried) vias in the flex layer must step back into the rigid area by a larger margin than might be obvious. Where tracks join circular pads in the copper of the flex layer, teardrops should always be applied; the gradual transition from linear to circular eliminates a possible stress point at the junction.
    Use of copper on the flexible layer or layers also adds to the list of ‘special rules’ and visualising what is happening when the flexible segment is called into action will show why.
    A common error is to have too much copper on the flexible layer; large areas of metal foil bonded to the polyimide will inhibit flexing. If continuous copper is needed, an appropriate level of cross-hatching will be required. For similar reasons, tracks should be routed perpendicular to the bend line. All copper should be derived from the original foil, as plating on the flexible surface tends to produce crystalline metal that is too brittle to guarantee long life.
    A variant on the rigid/flex formula is the application of stiffening to a portion of the flexible layer. In effect, this delineates a region of the assembly that is rigid, but which does not carry components. Stiffeners can provide support during assembly and can constrain the shape of the flexed circuit or they can provide structure and minimum thickness where the flex layer terminates by entering a zero insertion force connector. Although only providing mechanical support, the same set of rules applies to avoid creating problem areas.
    Those designing flexible PCBs have often found value in making paper models of their boards – ‘paper dolls’. Today, 3D representation of the design can provide that insight into real world geometry, while applying the parameters the board fabricator requires for high yield. By drawing accurate component profiles from its database, engineers can be certain from the outset that the flexed and folded product will fit together as intended, with no interference.
    Author profile:
    Robert Huxel is technical marketing manager, EMEA, Altium

  4. The Software Lifecycle Landscape
    With the development of embedded hardware, careful attention is given to the design and creation of highly detailed specifications that can be used to source board components. This is usually followed by a phased set of delivery milestones including prototype, implementation, test, supplier qualification and final release to production. This regimen offers the advantage of ensuring quality and functional requirements are met by the time manufacturing begins. Taking a waterfall approach to hardware development can minimize the risk of downstream issues once hardware goes into volume production.
    Software development is perceived to be completely different. The “soft” part in software implies there is an inherent ability to change along the way. Unfortunately, this can give management a false impression that software projects can easily accommodate change with little to no additional cost or schedule impact. The perception of a software developer is further glamourized in Dilbert cartoons by Scott Adams. [1] A programmer is considered to be a breed apart—working in a world combining art, technical inquisitiveness, social discomfort and sometimes even magic. All you need is creative talent, endless hours, lots of coffee, a PC and a handful of software tools.
    While hardware development projects are perceived to complete on a timely basis, software development projects are often late and cost much more than originally planned. Just ask any project manager in the embedded industry who has worked on both hardware and software projects. Given the choice, many project managers would rather manage hardware projects.
    Since both software and hardware projects may be under development at the same time, both sets of teams must collaborate. While hardware developers tend to work in a very structured and organized manner, software engineers perform a lot more trial and error. This “software way of life” can be quite a challenge for management, and I’ve seen more than one attempt in my career to convert more-disciplined hardware engineers into programmers. This rarely works due to different skill sets and a radically different mindset.
    Rather than demand that software people conform to hardware design approaches, a better way is to take lessons from enterprise and mobile software computing. We must identify the right approach for software development that marries the way software developers like to work with how work is performed.
    Audit Your Software Development Lifecycle
    Years back, my Datalight software development team was having a tough time juggling incoming product management requests making project schedules difficult to achieve. At the time, any mention of changing our process methodology to agile (specifically, Scrum) was met with, “no way! That stuff is just a fad.”
    I found it was easier to let the team identify what didn’t work at the time rather than force fit some newfangled agile methodology on them. So, I asked the team, “OK, what can we improve with our software development process?” Through rather intense brainstorming, we were able to compile a list of improvements to be made:
    1. Decisions made throughout a project lifecycle appear ad hoc and reactive.
    2. QA gets involved way too late.
    3. By the time the customer sees the product, it isn’t what they originally expected.
    4. Some features being developed aren’t getting completed on time. We usually find this out way too late to course correct.
    5. Handoffs at key milestones are ill-defined and by the time the project is completed, the original specifications and design documents aren’t even close to what was actually implemented. And yet, we never have time to update them.
    6. Requirements are often too abstract with little to no mention of priority or guidance for validation.
    Comparing Project Management Approaches
    In the software development community, there are two competing schools of thought for managing the process of motivating a team to deliver quality products: waterfall and agile. Figure 1 illustrates the differences between the two extremes with waterfall being more structured and less adaptable, while agile (Scrum and eXtreme Programming shown) is less structured and more adaptable.

    Figure 1:
     Agile versus traditional waterfall (Source: Datalight)
    As mentioned earlier regarding hardware projects, waterfall is a step-by-step, phased delivery approach. Completing one step should advance you to the next step as shown in figure 2.

    Figure 2:
     A waterfall overview (Source: Datalight)
    Figure 2 is not exactly linear; instead, the phases overlap between milestones. Developers may start the implementation even before the detailed design document has been signed off, and work may spill over into the next milestone; hence the term waterfall is appropriate.
    For software development, a sequence of phased milestones makes logical sense but, in practice, this approach is not as intuitive as you might think. According to Agile & Iterative Development, the waterfall method has become unattractive to software developers because: [2]
    • Users aren’t always sure what they want and once they see the work, they want it changed.
    • Details usually come out during implementation so committing to upfront schedules is not possible.
    • Approved specs are rarely accurate by the time a project is completed.
    • A long, disconnected series of steps with handoffs that are typically subjective.
    • Success seems far, far away and, in practice, schedules aren’t very predictable. This results in the team not seeing any work completion for possibly months. And then the problem begins when QA gets their hands on the code.
    To compensate for these issues, the Agile Alliance [3] was formed to focus on time-driven, customer efficiency for the software development industry. The Agile Alliance defined core values from W. Edwards Deming’s research on achieving quality and productivity improvements. [4] According to Deming, work can be more efficiently completed with small cross-functional teams focused on immediate quality results.
    Each mini-project takes on work by the team to be completed in fixed-length sprints. At Datalight, I’ve found that two-week sprints work best for major product upgrades and one-week sprints work best for product updates or research projects. Throughout each sprint, the team is focused on planning, doing, studying and assessing (PDSA) activities.
    The software community at large has embraced this approach most notably with Scrum shown in figure 3. Datalight has selected Scrum as our agile methodology to use.

    Figure 3:
     A Scrum process overview (Source: Datalight)
    Work is defined in T-shirt-sized estimates (small, medium, and all the way up to extra large) at the beginning of a project during a Scrum planning kickoff meeting. The team agrees on resources needed, sprint duration, features (called product backlogs) and project vision. And the most important work is taken off the remaining work stack in the first sprint planning meeting. The team breaks the work into tasks and assignments so that by the time the sprint ends, the work should have been assigned, designed, implemented and validated. To mitigate risks, the team meets every day to review what work has been accomplished, what is left to complete, and identify where assistance may be needed. Daily meetings may appear to be overkill, but if a daily 15-minute scrum can alleviate project risks, the time is well worth it.
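The sprint-planning mechanics described above can be sketched in a few lines. The point values assigned to each T-shirt size and the capacity figure are illustrative assumptions; real teams calibrate their own.

```python
# Sketch of sprint planning: pull the highest-priority backlog items into
# the sprint until estimated capacity is reached. The T-shirt-to-points
# mapping and the capacity are illustrative assumptions.
SIZES = {"S": 1, "M": 3, "L": 5, "XL": 8}

def plan_sprint(backlog, capacity):
    """backlog: list of (feature, size) tuples, ordered by priority."""
    sprint, used = [], 0
    for feature, size in backlog:
        points = SIZES[size]
        if used + points <= capacity:
            sprint.append(feature)
            used += points
    return sprint

backlog = [("login", "M"), ("sync", "XL"), ("logging", "S"), ("search", "L")]
print(plan_sprint(backlog, capacity=10))  # ['login', 'logging', 'search']
```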
    This drastically different approach assumes close-knit collaboration and effective communication. Unfortunately, software developers aren’t usually very proactive at communicating and that’s where a ScrumMaster steps in. A ScrumMaster must facilitate open discussion, communication and closure. This can take lots of extra time to prepare and communicate to the appropriate stakeholders.
    A unique characteristic of Scrumming software development projects is that a product owner and the customer should be represented at Scrum planning, sprint planning and sprint reviews. During a sprint review, also called a sprint retrospective, the customer has the opportunity to review progress and the team can adjust to that feedback in the next sprint.

  5. It's never been easier to start a hardware company. The Internet of Things (IoT) phenomenon—touted by many as the future of embedded computing—is now seen as an amalgam of affordable hardware and software platforms. At the same time, however, software complexity makes the IoT design process a classical case of survival of the fittest.
    IoT design is not a one-size-fits-all premise, because the diversity of sensors and connectivity solutions requires new design thinking. Moreover, IoT projects typically need to be high-performance, low-cost and low-power, and all of these characteristics are intertwined with the embedded software work one way or another.
    Take the high-performance and low-cost factors, for instance, which imply that reuse from previous projects is limited and that software engineering teams won't expand proportionately. So, instead of growing the software team and expanding the project cost, IoT product developers are looking to a new generation of tool-chains that help them accomplish greater software productivity.

    Figure 1: Across-the-board software tool-chain is crucial in the heterogeneous IoT designs. (Source: Atmel)
    The IoT design work is a walk on the tightrope also because there are so many moving parts, and design engineers can't afford to build a subsystem and see later if it works. The start-overs not only add to project cost, they also bring severe time-to-market constraints to the heterogeneous world of IoT design.
    So the IoT design recipe—starting with data capture from sensors to data analytics inside the cloud—requires design validation long before engineers commit silicon to an IoT product. Not surprisingly, therefore, an end-to-end hardware and software platform along with a suite of connectivity solutions is critical in dealing with complexities that come with the challenge of connecting a large number of devices in a seamless manner.
    The article delves into major software challenges in the IoT design realm and shows how the right choice of tool-chains can help deal with the embedded design challenges. It focuses on three key areas of the IoT software ecosystem and offers insight on how to execute the software work in an efficient and cost-effective manner.
    1. Software Complexity
    The hardware-software work allocation generally comes down to a 40:60 split in embedded design projects. In IoT design projects, however, the tilt toward the software ecosystem is even greater.
    The IoT developers are migrating from 8-bit and 16-bit microcontrollers to 32-bit devices in order to accomplish higher performance and enhanced functionality in areas such as connectivity, graphic display and cloud computing.
    And that calls for a new software playing field that can efficiently execute communication and cloud computing protocol stacks. Then, there are tasks like the real-time sampling of sensor data, device configuration, security keys, apps, data analytics and more.
    Furthermore, a lot of software in the IoT designs is related to communication stacks like TCP/IP and security libraries such as SSL and TLS; these software components are written to comply with specific standards. These software components have been written before, time and time again, and it makes little sense for an IoT developer on a tight deadline to re-implement them instead of using the existing software.
    In fact, creating this software from scratch risks introducing issues that have already been found and fixed in the existing implementations.

    Figure 2: A complete software ecosystem is vital in confronting the complexity of IoT designs. (Source: Atmel)
    Tips and Tricks:
    • Integrated development environments (IDEs) are the first line of defense in coping with the software complexity that comes with ever more functionality implemented in the IoT applications.
    • When an IoT designer adds services to an application, the dependent software components and drivers are added to the design automatically. For instance, if an embedded developer adds a USB device to a design, ASF (the Atmel Software Framework) will automatically add low-level USB drivers to the design.
    • You can use tools such as Atmel START, an online software configuration and deployment engine capable of speeding up the creation of embedded software even further. It is a web-based tool that allows developers to graphically select and configure software components and automatically integrate them with the hardware and middleware resources needed. Being web-based, it is completely OS-independent and requires nothing to be installed on the user’s PC. In addition, the generated projects can be targeted to any embedded IDE, offering unparalleled flexibility.
    2. Code Size and Density
    Another crucial challenge for embedded designers is code size and density, which affects both hardware and software efficiency. On the one hand, IoT systems require greater intelligence, which leads to more software and algorithms; on the other hand, IoT solutions need to be low-cost as well as low-power.
    IoT applications can easily top tens of thousands of lines of code, which demands far more than simply writing the application code. The rising amount of code means more flash and RAM, which in turn leads to larger and more expensive chips. That not only adds to the cost of the IoT design but also to its power consumption.
    In the IoT design realm, speed of execution is a key criterion in managing software complexity, while energy efficiency is closely tied to the size of the program code. For a start, there is the sensor network code that moves sensor data to the IoT edge node or gateway.
    Then, there is the TCP/IP protocol stack for the Ethernet controller, which often consumes from 50 KB to 100 KB. Likewise, the connectivity links (Bluetooth, Wi-Fi, ZigBee, etc.) come with protocol stacks that comprise network management, authentication and encryption, and can take twice the memory space of the TCP/IP stack.
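These footprint figures suggest a simple budgeting exercise before committing to a part. The sketch below takes the TCP/IP size from the text's 50 KB to 100 KB range and assumes roughly 2x that for the wireless stack; the application size and flash total are illustrative.

```python
# Rough flash-budget sketch. TCP/IP size uses the upper end of the
# 50-100 KB range in the text; the wireless stack is assumed ~2x TCP/IP;
# the application size and flash total are illustrative assumptions.
STACK_KB = {"tcpip": 100, "wifi": 200, "app": 64}

def flash_headroom_kb(components, flash_kb):
    used = sum(STACK_KB[c] for c in components)
    return flash_kb - used

print(flash_headroom_kb(["tcpip", "wifi", "app"], flash_kb=512))  # 148
```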

    Figure 3: Atmel Data Visualizer can identify power spikes caused by the specific parts of a code. (Source: Atmel)
    Tips and Tricks:
    • A new breed of microcontrollers is now equipped with tightly-coupled memory (TCM) features that offer single-cycle access to the CPU and thus boost high-priority latency-critical requests from peripherals. The IoT developers can calibrate the amount of code that requires zero-wait execution performance, so they can dedicate TCM resources to such code segments and data blocks.
    • It's quite difficult to determine which parts of a software program are consuming too much power. However, there are tools like Atmel Power Probe that enable IoT developers to quickly figure out which parts of the code are high on the energy usage.
    • Then, there are tools like Atmel Data Visualizer plug-in that can profile power usage of an IoT application as part of a standard debug session. Live power measurements can be captured during application execution, and power usage can also be correlated to application source code effortlessly. Moreover, by clicking on a power sample, the tool will highlight the code that was executed when the sample was taken, making it very easy to optimize an application for low-power usage. It also provides an oscilloscope view of signals like GPIO and UART.
    • A new array of energy-efficient microcontrollers can now intelligently turn power on during activity and off during idle periods, drawing very little power when asleep. Battery-powered IoT applications can save a lot of power in always-on sensor operations by allowing the hardware to wake up, perform its task, and go back to sleep.
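The savings from this wake/sleep pattern follow from a simple duty-cycle average. All currents and durations below are illustrative assumptions.

```python
# Average current for a duty-cycled sensor node (illustrative numbers).
def avg_current_ma(active_ma, sleep_ma, active_s, period_s):
    duty = active_s / period_s                    # fraction of time awake
    return active_ma * duty + sleep_ma * (1 - duty)

# 10 mA awake for 1 s out of every 10 s, 0.01 mA asleep:
print(round(avg_current_ma(10, 0.01, 1, 10), 3))  # 1.009
```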

    3. Cloud and Data Deluge
    Cloud and data deluge is the third and equally critical part of the IoT's software conundrum. The software protocol stack for the cloud communication encompasses tasks such as device configuration, file transfers, and rule-based data analysis and response.
    First and foremost, robust data analytics has a crucial role in creating the real value from data generated by the sensors, machines or things connected to the cloud. Next, there are security aspects that block unauthorized code through application whitelisting and ensure an authentic data connection to the cloud.
    Small- to mid-size IoT outfits face the huge challenge of acquiring and effectively using software tool-chains that encompass data acquisition, processing and analytics. Next up, they require a software ecosystem that can confront the highly fragmented world of IoT designs.
    That shows why end-to-end solutions are vital in the IoT environment and why the right engineering decisions are so critical regarding the IoT software ecosystem. A new breed of design tools is required to deal with the flood of networked sensors, and it can help small- and mid-size IoT developers handle the cloud services that are adding to the software overhead.
    Tips and Tricks:
    • Generally, cloud communication goes beyond the core expertise of many IoT product developers, so it makes sense for them to partner with a cloud-based IoT platform provider. A cloud-based IoT suite includes commercial-grade embedded software, SDKs for embedded devices, IoT reference designs, device and application APIs, and highly scalable communication services.
    • In order to rapidly deploy connected devices, it is increasingly important for developers to include the availability of ready-made device connectivity libraries as part of the initial technology evaluation process.
    • Companies such as Atmel partner with a number of market-leading providers of end-to-end cloud solutions fully capable of handling these aspects for developers. The partners in this cloud ecosystem each provide their own distinctive features, making it easy to find solutions that fit particular use cases and needs.

  6.  by Bruno Tolla, Ph.D., Denis Jean and Xiang Wei, Ph.D.
    Several performance attributes must be considered under challenging thermal conditions.
    The design of fluxes for a selective soldering application poses unique problems due to the localization of the soldering process. Both the heat treatment and the scrubbing action of the flux residue by the solder wave are confined in the soldered area. To address this specific issue, the flux formulator follows two complementary strategies.

    First, the physical characteristics of the flux are optimized in synergy with the application process to minimize its footprint on the board. The flux must work in concert with the drop jet dispensing head to flow seamlessly (e.g., no clogging) during the entire operation, localize the deposit and, finally, stay in place.
    Dispensing process parameters (open time, frequency, robot speed), as well as the board preheat temperature, are critical parameters,1 and their optimal settings depend on the characteristics of the flux (viscosity, surface tension, solid content, solvent).

    Assembly materials also play an important part, as the optimal surface energy of the solder mask is typically lower than that for the conventional wave soldering process (35mN/m vs >50mN/m) in order to prevent excessive bleeding of the flux on the board after deposition. Hence, the design of a selective soldering flux is a good illustration of the mandatory collaboration between the formulators, equipment and assembly materials manufacturers from the very beginning of the design process.

    Second, the flux chemical package is formulated to minimize the impact of unavoidable spreading and splashing events. These will result in partially heated flux residues, which won’t be removed by the washing action of the solder. As such, they pose a serious threat to assembly reliability, as ionic residues can induce electrochemical migration, corrosion and resistance losses, which could result in the in-field failure of the assembly when exposed to a moist environment.2,3 It is therefore of paramount importance to establish a correlation of the thermal history of the flux with the reliability of the residues. From this perspective, a series of activator packages has been designed specifically to guarantee an optimal reliability when partially heated.4 The reliability of the fluxes designed for the selective soldering application was assessed using common industry standards, where the flux was subjected to various thermal conditioning (TABLE 1).
    TABLE 1. Flux Reliability Testing Methods
    kesterTable1

    A statistically significant experimental protocol was conducted on Ersa selective soldering equipment to evaluate the impact of materials and process parameters on the following response factors: dispensing performance (clogging and satellites), flux spread and soldering performance. The results of these experiments are reported in the following paragraphs.
    Flux spread. Flux spread is influenced by the surface tension of the flux and its temperature.  Alcohol-based fluxes have a much lower surface tension than VOC-free fluxes, which are water-based (22mN/m at 25˚C vs. 72mN/m). Also, the dispensing temperature will tend to favor spreading by lowering the flux viscosity.

    On the other hand, the impact of the board preheat will depend on the nature of the flux; VOC-free fluxes tend to spread more on warmer boards, while an alcohol-based flux will show the opposite trend, as the temperature-thinning effect competes with the high drying rate of the flux. Finally, the surface energy of the solder mask is another critical parameter; lower surface energies are favored for selective soldering fluxes compared with conventional wave soldering fluxes (35mN/m vs. >50mN/m) to increase the contact angle of the flux on the substrate. This is easily understood when looking at the balance of surface tensions modeled in Young’s equation: γSG = γSL + γLG·cosθ. It should be noted that preheating the board may impact the surface energy of the solder mask.
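Rearranging Young's equation gives the contact angle directly: cosθ = (γSG − γSL)/γLG. A small sketch with illustrative surface-tension values (the γSL figure is assumed) shows why a lower solder-mask surface energy raises the contact angle:

```python
import math

# Contact angle from Young's equation: gamma_SG = gamma_SL + gamma_LG*cos(theta).
# Surface tensions in mN/m; the gamma_SL value is an illustrative assumption.
def contact_angle_deg(gamma_sg, gamma_sl, gamma_lg):
    x = (gamma_sg - gamma_sl) / gamma_lg
    x = max(-1.0, min(1.0, x))  # |x| >= 1 means complete (de)wetting
    return math.degrees(math.acos(x))

# Alcohol-based flux (gamma_LG ~ 22 mN/m) on a 35 mN/m vs. 50 mN/m mask:
print(round(contact_angle_deg(35, 20, 22), 1))  # ~47 deg: flux beads up more
print(contact_angle_deg(50, 20, 22))            # 0.0: complete wetting
```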

    We estimate the intrinsic spreading of fluxes by deposition on a representative set of PCBs with various solder mask types. In contrast, in-process optimizations of the flux spread are conducted by directing the drop jet on the shiny side of an aluminum foil, which presents a comparable surface energy, and measuring the dried deposits after preheat as illustrated in FIGURE 1.
    kester1
    FIGURE 1. Flux spread measurement (Left: PCB; right: Al foil).
    Drop jet dispense: clogging and satellites. High-frequency drop jet technology has been developed to narrow the spray pattern compared to atomizing-type aerosol spray heads or ultrasonic spray fluxers. The deflection of the flux droplets is minimized, but satellites are always a possibility; these very small flux droplets of varying size appear in random directions outside the direct deposition corona. They depend on the physicochemical characteristics of the flux (viscoelastic properties, surface tension), hence its formulation, coupled with the jetting process itself. It is critical to mitigate the formation of satellites, as these side deposits won’t be exposed to the same heat cycle and solder scrubbing mechanism as non-deflected droplets, and will therefore pose a serious threat of electrochemical migration under bias in a moist environment.

    Another processing issue frequently encountered during dispensing is clogging of the drop-jet fluxer, a result of the spray head's narrow channels (typically 130µm) combined with the high volatility of alcohol-based selective soldering fluxes.

    Both clogging and satellite formation are assessed in the same set of experiments, which enables efficient screening of flux formulations on industrial selective soldering equipment. Fax paper is used to identify the location and geometry of the droplet deposits. Note that fax paper is typically avoided for in-process spread measurement, as absorption of the flux into the substrate yields an inaccurate representation of the flux spread.

    The tests consist of successive deposition cycles executed at increasing time intervals. Twenty dots are printed at 2-sec. intervals in one cycle. This sequence is repeated four more times, with 30-sec. breaks between cycles. The whole procedure is then repeated four more times, this time with 15-min. breaks in between. The dot geometries and satellite positions are computed for the 500 dots deposited in total using image analysis software. The successive breaks make the procedure particularly aggressive: in our experience, a continuous flow of flux through the drop jet unit self-regenerates the head, while the breaks give deposits time to accumulate, ripen and solidify on the sides, where they are more difficult to remove when the sequence restarts. Consequently, fluxes can be efficiently discriminated through this experimental protocol.
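    Reading "repeated four times" as four repeats of the initial run (five in all), the schedule reproduces the stated 500-dot total; a quick sketch:

```python
def deposition_schedule(dots_per_cycle=20, cycles=5, procedures=5,
                        dot_s=2.0, cycle_break_s=30.0, procedure_break_s=900.0):
    """Dispense times (seconds) for the full clogging/satellite test:
    20 dots at 2 s intervals per cycle, 5 cycles separated by 30 s breaks,
    5 procedures separated by 15 min (900 s) breaks."""
    times, t = [], 0.0
    for _ in range(procedures):
        for _ in range(cycles):
            for _ in range(dots_per_cycle):
                times.append(t)
                t += dot_s          # dot-to-dot interval
            t += cycle_break_s      # break after each cycle
        t += procedure_break_s      # long break after each procedure
    return times

times = deposition_schedule()
print(len(times))  # 500 dots in total
```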

    FIGURE 2 shows a representative set of deposition patterns obtained with various fluxes: flux #2 deposited in near-perfect conditions, while #10, #6 and #13 show multiple defects. Note that satellites and variations in deposit surface area often occur at the beginning of a sequence, when residue concentration at the head is at its maximum, which demonstrates that this buildup is the major root cause of dispensing failures.
    FIGURE 2. Typical deposition patterns for the drop jet dispensing test.

    Overall, 15 formulas were screened using the statistically significant set of data generated during these complex deposition sequences. Results of the deposition pattern analysis are reported in FIGURE 3. The efficiency of this screening method is confirmed, as large differences are observed between fluxes in the uniformity of the deposits across the whole deposition sequence (dot area and dot circle), as well as in the occurrence and spatial distribution of satellites. Fluxes #1, 2, 3 and 4 presented the best dispensing performance and were selected for the final soldering performance assessment.
    FIGURE 3. Dispensing test results for 15 experimental formulas. Statistical analysis of 500 data points per formula, collected using image analysis software.
    Soldering performance. A performance evaluation was completed on an industrial selective wave soldering machine from Ersa (FIGURE 4), using 93 mil FR-4 boards made of four copper layers (1/2/2/1 oz.), solder mask over bare copper (SMOBC) and an OSP finish (FIGURE 5).
    FIGURE 4. Selective soldering equipment configuration.
    FIGURE 5. Selective soldering testing board.
    These boards were populated with 16-pin dual inline packages (DIPs) containing IC chips and 96-pin Eurocard connectors.

    To determine the soldering performance of the fluxes, the L16 Taguchi design of experiments reported in TABLE 2 was used.
    TABLE 2. Soldering Performance DoE parameters

    The response factors were flux spread (measured area), % hole fill (evaluated by x-ray inspection), and the number of solder bridges and solder balls (visual count).
    The statistical analysis of the results is reported in FIGURE 6. Main effects are represented here, as second-order interactions were found to be statistically insignificant.
    FIGURE 6. Soldering performance DoE results.

    All fluxes presented similar spreading results, the only impactful process parameter being the dispensed volume. Hole fill performance was comparable between fluxes, although particular attention must be paid to the board preheat temperature. This result agrees with our background knowledge: preheat times are relatively long in point-to-point selective soldering applications, which challenges the thermal stability of the activator packages. In this context, very satisfactory performance was found with all fluxes at a preheat temperature of 110˚C. More flux discrimination appeared when considering soldering defects. Flux #1 clearly stands out from the three other fluxes, with minimal defect rates observed in all conditions. The strong impact of board preheat temperature on defects confirms our initial interpretation of the activators' thermal stability envelope.
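    The main-effects view in Figure 6 boils down to averaging each response over the runs at each factor level. A minimal sketch of that calculation, using an invented run table (the actual L16 matrix and responses are not reproduced here):

```python
# Synthetic illustration of a Taguchi main-effects calculation; the run
# matrix and defect counts below are invented, not the article's data.
runs = [
    # (flux, preheat_C, defect_count)
    (1, 90, 4), (1, 110, 1),
    (2, 90, 7), (2, 110, 3),
    (3, 90, 9), (3, 110, 4),
    (4, 90, 8), (4, 110, 5),
]

def main_effect(runs, factor_index, response_index=2):
    """Average the response over all runs at each level of one factor."""
    levels = {}
    for run in runs:
        levels.setdefault(run[factor_index], []).append(run[response_index])
    return {level: sum(vals) / len(vals) for level, vals in levels.items()}

effect = main_effect(runs, factor_index=1)  # main effect of preheat temperature
print(effect)  # {90: 7.0, 110: 3.25} -> higher preheat, fewer defects in this toy data
```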

    Conclusion
    The design of high-performance fluxes for selective soldering applications requires a combination of formulation, application and equipment expertise that mandates a strong partnership between flux designer and equipment manufacturer. Multiple performance aspects have to be taken into account. The flux itself must have proven reliability (corrosion, electrochemical migration) under various heat exposure conditions, in particular when only partially activated. Having down-selected a series of fluxes filling this requirement, it is necessary to conduct statistically designed experiments on industrial wave soldering machines to map the relationships between flux characteristics and selective process friendliness. In this area, multiple performance attributes are considered: compatibility with drop jet dispensing (clogging effects, cleaning frequency, and satellite formation), spreading on the board (in actual processing conditions, with multiple solder resist types) and soldering performance (fluxing activity, thermal stability) as measured by barrel filling and defect production.

  7. Open any magazine and it’s clear that applications for 3D printing are exploding. Yet one area that remains largely unexplored is the use of additive manufacturing for electronics. The convergence of electronics and 3D printing will have staggering implications for the electronics industry—particularly around printed circuit boards and rapid prototyping.

    Not surprisingly, the 3D printed electronics space is in its infancy, more or less at the same level of adoption as regular 3D prototyping was in 2009. But its slow adoption is not from a lack of interest or need; rather, it's because creating 3D printers for PCBs is exceedingly complex, and existing inks and printers simply weren't up to the challenge. These printers must be able to print conductive traces, which is the domain of printed electronics, and produce components that meet the demanding performance requirements of aerospace, defense, consumer electronics, the Internet of Things and even wearables.

    Printer nuances
    Certainly, there already are some 3D printers capable of including basic conductive traces by extruding conductive filaments as embedded wiring. The end result of these printing techniques is a low-resolution, point-to-point conductive trace that may be suitable for hobbyists but not for professional electronics. The higher resolution and higher conductivity that professional electronics demand require more advanced printing solutions and materials.

    Conductive circuit printer systems are also available today. They are designed to print conductive traces on one, and sometimes both, sides of a substrate, creating two-sided PCBs. These are printed electronics, however, not 3D printed electronics, which builds up a PCB on a substrate with layer after layer of material, creating a true multilayer, interconnected, 3D-printed circuit board. 3D-printing electronics requires advanced materials and highly specialized equipment.

    3D printers and materials for PCBs
    Developing systems for true 3D-printed electronics involves creating exceedingly precise hardware with three axes: X, Y and Z. It also requires using specialty inks that are engineered at the nanoparticle level. The final element needed is advanced software that ties it all together, including the ability to effortlessly convert standard PCB Gerber design files—which are designed for 2D manufacturing environments—into 3D printable files. This allows for the 3D printer to print the substrate to the required thickness, leave and fill holes where vias are required, and more. Software for the design and validation of freeform circuit geometries isn’t yet readily available in the marketplace but will open up further electronics design abilities.

    Still, despite the complexities of building such 3D printers, the benefits of using them are obvious for electronics and other industries. PCB designers and electronics engineers are eager for the first 3D printers for professional printed electronics to emerge. My company will answer that call when the Nano Dimension DragonFly 2020 3D Printer, which we’ve been demonstrating at shows including CES 2016, becomes available commercially later this year. It is anticipated to be the first entrant into this new class of high resolution enterprise 3D printers.

    Practical uses and benefits for prototyping
    Interest in these highly specific 3D printers is very high. The possibility of using additive manufacturing to create professional PCBs offers manufacturers the flexibility of printing their own circuit board prototypes in-house for rapid prototyping, R&D, or even for custom manufacturing projects. While it is unlikely that 3D printers for electronics will replace all of the traditional processes for in-house development of high-performance electronic device applications, they will be particularly useful for prototyping, reducing time to build from weeks to just hours.

    Manufacturers adopting this new technology can expect a variety of gains, including cutting their time to market with new products and speeding iterations and innovation around PCBs. With a 3D PCB printer, they can even build and test PCBs in sections if they’d like.

    For many, one of the most exciting developments with this technology is that they will no longer need to send out their intellectual property to be manufactured off-site by specialist sub-contractors—which essentially puts their IP at risk. For others, the promise of rapid prototyping, significant reductions in the development costs and increased competitive edge are the most important benefits.

    But perhaps most importantly, 3D printing for circuit boards offers nearly limitless design flexibility.



    A PCB printed on a Nano Dimension 3D printer

    With traditional PCB prototyping, turnaround times of weeks or even months for multiple iterations while perfecting a design can wreak havoc on time-to-market. Given that, many designers opt for more conservative designs. Printing the PCB prototypes in-house means designers can risk being more creative without slowing the development process.

    Also, manufacturing currently requires multiple specialized (and expensive) techniques, such as precision drilling, chemical etching, plating, pressing and lamination. These techniques, which are usually outsourced to companies in Asia, could all be done easily with in-house 3D printing in just hours, even when the PCB has multiple layers and many interconnects.

    3D printing of PCBs will help to keep up with the changing needs of customers who require device miniaturization and customization.


    Continue reading at EE|Times. 

  8. Based on discussions at DesignCon 2016 and since, I have three predictions about major changes ahead for high speed serial link systems.
    Rollout of 28 Gbps systems will be slower than expected.
    I hear that the semiconductor companies producing the CMOS devices—ASIC, FPGA or custom—are doing fine producing the silicon with acceptable performance at 28 Gbps. Figure 1 is an example of a very clean eye from a 28 Gbps TX (transmitter).
    Figure 1. Today's silicon can produce clean signals at 28 Gbps, at least at the transmit end.
    Semiconductor manufacturers' ability to sell to end users designing and manufacturing systems with 28 Gbps links is, however, limited by their ability to support these customers.
    A link operating at 28 Gbps NRZ (non-return-to-zero) has to be designed with everything working almost perfectly. This data rate pushes limits on every front: low-Df materials, smoother copper, wide enough lines, equalization tuned to the limit of recovering -25 dB of insertion loss, minimal reflections, via stubs shorter than 15 mils, channel-to-channel crosstalk less than -50 dB, and line-to-line skew less than 6 ps over runs as long as 20 in.
    By themselves, each item is possible to engineer, but all of them at the same time in the same channel requires solid engineering and analysis. Not every design team is capable of this task. When the channel does not work, who do they call? The silicon provider.
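    To see why every item must work at once, here is a rough loss-budget check against the -25 dB equalization limit. The 1 dB/in. trace loss at the 14 GHz Nyquist frequency and the 2 dB connector allowance are assumed figures, not from the article:

```python
def channel_loss_db(trace_in, loss_db_per_in, via_stub_penalty_db=0.0, connector_db=0.0):
    """Total insertion loss for a simple point-to-point channel budget.
    All inputs are assumed, illustrative values."""
    return trace_in * loss_db_per_in + via_stub_penalty_db + connector_db

EQ_LIMIT_DB = 25.0   # equalization can recover about -25 dB of insertion loss
loss = channel_loss_db(trace_in=20.0,        # 20 in. route, per the article's skew example
                       loss_db_per_in=1.0,   # assumed low-Df laminate loss at 14 GHz
                       connector_db=2.0)     # assumed connector/via allowance
print(loss, loss <= EQ_LIMIT_DB)  # 22.0 True -- only ~3 dB of margin left
```

    With these assumptions the budget barely closes, which is why a slightly lossier laminate or a long via stub can push a 20 in. channel past what equalization can recover.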

    I hear that with a limited number of experienced support application engineers, the silicon providers are focusing on their large, high-end OEM customers and are limiting their sales based on which customers they have the resources to support. This may be a business opportunity for consulting engineering teams to work with silicon providers to support their customers and increase the design wins and sales of 28 Gbps capable silicon.
    There is a potential roadblock ahead for 56 Gbps PAM4 systems.
    A number of channels have been demonstrated operating at 56 Gbps with PAM4. The picks and shovels needed for PAM4 systems are in place. Most of the high-end software vendors have shown design tools for simulating PAM4, and all the high-end oscilloscope and BERT (bit-error-rate tester) manufacturers have shown instruments able to measure and characterize PAM4 systems. Figure 2 shows the measured eye for a 56 Gbps PAM4 link.
    Figure 2. At the transmitter, a PAM4 signal is clean enough for all three eyes to be visible.
    It's widely believed that the advantage of going to PAM4 at 56 Gbps is that we only have to deal with signals having the equivalent bandwidth of 28 Gbps NRZ signals. If we can design a channel for 28 Gbps PAM2, we should be able to design one for 56 Gbps PAM4.
    Not so fast, for there is one significant difference with PAM4. By dividing the signal into three levels plus zero, we dropped the signal level for each eye to 1/3 of the full swing. The signal voltage we have to resolve is smaller. If we need a particular SNR (signal-to-noise ratio) at the receiver for an NRZ-PAM2 signal, and the signal level drops by roughly 10 dB, the acceptable noise level has to drop by 10 dB in PAM4. But wait, we're not done.
    In NRZ-PAM2, we need about -50 dB isolation between a channel and all other aggressors for an SNR of 20 dB. With the lower noise floor required in PAM4, this means an isolation of -60 dB. When it comes to crosstalk, the aggressor still carries full-swing signals corresponding to the highest of the four levels, which can be 3x larger than a single PAM4 eye. To keep the same noise on the victim line when the aggressor is 10 dB stronger, we need another 10 dB of isolation. This means an isolation as low as -70 dB between the victim channel and all other aggressor channels.
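    The arithmetic above can be checked directly: each PAM4 eye is one third of the full swing, i.e. 20·log10(1/3) ≈ -9.5 dB (rounded to 10 dB in the text), and that penalty is paid twice, once for the smaller eye and once for the full-swing aggressor:

```python
import math

def db(ratio):
    """Voltage ratio in decibels."""
    return 20 * math.log10(ratio)

eye_penalty_db = db(1 / 3)        # each PAM4 eye is 1/3 of the NRZ swing, ~ -9.5 dB
nrz_isolation_db = -50.0          # isolation needed for ~20 dB SNR with NRZ

pam4_isolation_db = nrz_isolation_db + eye_penalty_db   # smaller eye -> lower noise floor
pam4_isolation_db += eye_penalty_db                     # full-swing aggressor vs small eye
print(round(eye_penalty_db, 1), round(pam4_isolation_db, 1))  # -9.5 -69.1
```

    Rounding each ~9.5 dB step to 10 dB gives the -70 dB figure quoted in the text.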
    I hear that the weak link in achieving this low level of isolation is in the via field under the BGA. At this low level of crosstalk required, issues such as differential to differential coupling in the via field under the BGA and common noise to differential noise conversion in all via fields, in connectors and in channel to channel cross talk, can be showstoppers. While it may be possible, with good engineering practices and optimized pad stack design to reduce cross talk to the -50 dB level, getting to -70 dB is a major engineering effort.
    At this level, as well designed as a via area is, manufacturing variations in the fabricated board can push a system into too much cross talk.
    There are some fundamental limitations to what can be done at the board level if the package footprint is poorly designed. This puts a larger burden on the silicon providers to design the package footprint with channel to channel cross talk at the board level via field in mind. This does not play to their strengths.
    While getting one channel operating at 56 Gbps PAM4 is possible, getting hundreds of channels operating in close proximity at an acceptable bit error ratio may require heroic efforts.
    All is not doom and gloom
    I did hear of one innovation that may be the savior for high-speed serial links in copper-based interconnects. Given the increasing challenges to get a long channel operating at 28 Gbps in PAM2-NRZ or a 56 Gbps channel operating at PAM4, there may be an intermediate fix available. Every large connector company I spoke with has a practical plan to implement cabled interconnects integrated with the board to supplement laminated backplane and motherboard routing.
    The advantage of a cabled system is lower loss and less channel-to-channel cross talk. The larger circumference in the round conductors means lower conductor loss per length in a cable than on a board. While there may be lower cross talk in the cable interconnects, the cross talk in the connector and its board footprint still needs to be considered, but many of the connector companies seem very good at this.
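    The circumference argument follows from skin-effect resistance, which at high frequency scales inversely with the product of conductor perimeter and skin depth. A sketch comparing assumed geometries (a 5-mil half-ounce trace vs. a 12-mil round cable conductor; neither is from the article):

```python
import math

def rac_per_meter(perimeter_m, freq_hz, rho=1.68e-8, mu0=4e-7 * math.pi):
    """Approximate skin-effect AC resistance per meter: rho / (perimeter * skin depth).
    rho is copper resistivity; geometry values are assumed for illustration."""
    skin_depth = math.sqrt(rho / (math.pi * freq_hz * mu0))
    return rho / (perimeter_m * skin_depth)

f = 14e9                  # Nyquist frequency of 28 Gbps NRZ
mil = 25.4e-6             # meters per mil
trace = rac_per_meter(2 * (5 * mil + 0.65 * mil), f)   # assumed 5 mil x 0.65 mil trace
cable = rac_per_meter(math.pi * 12 * mil, f)           # assumed 12 mil dia round conductor
print(round(trace / cable, 2))  # round conductor: ~3x less conductor loss per length
```

    The ratio tracks the perimeter ratio, which is the cable advantage the connector vendors are exploiting; dielectric loss in the cable is typically lower as well.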
    These solutions involve a connector system to mate between the board and an array of cables and back to the board. The idea is to route long distance, high bandwidth signals off the board, through cables and then back to the board. Figure 3 shows an example: the Firefly product from Samtec. A nice feature of the Samtec system is the integration of optical cables as well as copper cables to ease the transition to board-level optical interconnects.
    Figure 3. Samtec's Firefly interconnect system merges optical and electrical connections to improve signal integrity.
    This sort of approach, with much lower loss at 14 GHz and 28 GHz, may be the short-term fix that enables either a robust 56 Gbps (28 Gbaud) PAM4 system or a PAM2-NRZ 56 Gbps system without the headache of the extremely high isolation requirements of a PAM4 system.
    This sort of backplane architecture moves the interconnect roadmap onto a different trajectory and may give additional headroom to copper interconnects into the next generation of data rates. With the option of also including fiber optics, it may be the “gateway drug” into the long touted optical backplane architecture of the future.

  9. When we create a printed circuit board, the chances are these days that we’ll export it through our CAD package’s CAM tool, and send the resulting files to an inexpensive PCB fabrication house. A marvel of the modern age, bringing together computerised manufacturing, the Internet, and globalised trade to do something that would have been impossible only a few years ago without significant expenditure.
    Those files we send off to China or wherever our boards are produced are called Gerber files. It's a word that has become part of the currency of our art: "I'll send them the Gerbers" trips off the tongue without our considering the word's origin.
    This morning we’re indebted to [drudrudru] for sending us a link to an EDN article that lifts the lid on who Gerber files are named for. [H. Joseph Gerber] was a prolific inventor whose work laid the ground for the CNC machines that provide us as hackers and makers with so many of the tools we take for granted. Just think: without his work we might not have our CNC routers, 3D printers, vinyl cutters and much more, and as for PCBs, we’d still be fiddling about with crêpe paper tape and acetate.
    An Austrian Holocaust survivor who escaped to the USA in 1940, [Gerber] began his business with an elastic variable scale for performing numerical conversions that he patented while still an engineering student. The story goes that he used the elastic cord from his pyjamas to create the prototype. This was followed by an ever-more-sophisticated range of drafting, plotting, and digitizing tools, which led naturally into the then-emerging CNC field. It is probably safe to say that in the succeeding decades there has not been an area of manufacturing that has not been touched by his work.
    So take a look at the article, read [Gerber]’s company history page, his Wikipedia page, raise a toast to the memory of a great engineer, and never, ever, spell “Gerber file” with a lower-case G.
    MEMS gyroscopes offer a simple way to measure the angular rate of rotation, in packages that easily attach to printed circuit boards, so they are a popular choice as the feedback sensing element in many different types of motion control systems. In this role, noise in the angular rate signals (the MEMS gyroscope output) can directly influence critical system behaviors such as platform stability, and is often the defining factor in the level of precision a MEMS gyroscope can support.
       
    Therefore, “low noise” is a natural guiding value for system architects and developers as they define and develop new motion control systems. Taking that value a step further, translating critical system-level criteria, such as pointing accuracy, into noise metrics that are commonly available in MEMS gyroscope datasheets is a very important part of early conceptual and architectural work. Understanding the system's dependence on gyroscope noise behaviors has a number of rewards, such as being able to establish relevant requirements for the feedback sensing element or, conversely, analyzing the system-level response to noise of a particular gyroscope.

    Once system designers have a good understanding of this relationship, they can focus on mastering the two key areas of influence that they have over the noise behaviors in their angular rate feedback loops: (1) developing the most appropriate criteria for MEMS gyroscope selection and (2) preserving the available noise performance throughout the sensor’s integration process. 
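    As one example of translating a datasheet noise metric into a system-level pointing error, the angle random walk (ARW) specification predicts a 1σ orientation error that grows as ARW·√t when the rate output is integrated. The ARW figure below is assumed for illustration, not tied to any particular part:

```python
import math

def pointing_error_deg(arw_deg_per_rt_hr, t_seconds):
    """1-sigma angle error from integrating white rate noise for t seconds.
    ARW is given in deg/sqrt(hr), the usual datasheet unit; divide by
    sqrt(3600 s/hr) = 60 to convert to deg/sqrt(s)."""
    arw_deg_per_rt_s = arw_deg_per_rt_hr / 60.0
    return arw_deg_per_rt_s * math.sqrt(t_seconds)

# Assumed ARW of 0.3 deg/sqrt(hr), integrated for 100 s.
err = pointing_error_deg(0.3, t_seconds=100.0)
print(round(err, 3))  # 0.05 deg of 1-sigma pointing error
```

    Running the comparison in the other direction, a pointing-accuracy budget fixes the largest ARW the selected gyroscope can have, which is exactly the selection criterion discussed above.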

    Motion control basics
       
    Developing a useful relationship between the noise behaviors of a MEMS gyroscope and their impact on key system behaviors often starts with a basic understanding of how the system works. Figure 1 offers an example architecture for a motion control system, breaking the key system elements down into functional blocks. The functional objective of this type of system is to create a stable platform for personnel or equipment that can be sensitive to inertial motion. One example application is a microwave antenna on an autonomous vehicle platform that is maneuvering through rough conditions at a speed that causes abrupt changes in vehicle orientation. Without real-time control of the pointing angle, these highly directional antennas may not be able to support continuous communication while experiencing this type of inertial motion.


    Figure 1: Example Motion Control System Architecture
      
    The system in Figure 1 uses a servo motor, which rotates in a manner equal and opposite to the rotation that the rest of the system experiences. The feedback loop starts with a MEMS gyroscope, which observes the rate of rotation (ωG) on the “stabilized platform.” The gyroscope's angular rate signals then feed into application-specific digital signal processing, which includes filtering, calibration, alignment and integration, to produce real-time orientation feedback (φE). The servo motor's control signal (φCOR) comes from comparing this feedback signal with the “commanded” orientation (φCMD), which may come from a central mission control system or simply represent the orientation that supports ideal operation of the equipment on the platform.
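    The loop in Figure 1 can be sketched in a few lines: integrate the gyro rate to get φE, compare it with φCMD, and drive the servo with φCOR. The gain and sample time are placeholders, and the real signal chain also includes the filtering, calibration and alignment steps named above:

```python
def control_step(phi_cmd, phi_e, omega_g, dt, k_p=1.0):
    """One iteration of the Figure 1 feedback loop (gain and timing are illustrative)."""
    phi_e = phi_e + omega_g * dt          # integrate gyro rate -> orientation feedback
    phi_cor = k_p * (phi_cmd - phi_e)     # error between command and feedback drives the servo
    return phi_e, phi_cor

# Platform disturbed at a constant 2 deg/s; the command is to hold 0 deg.
phi_e = 0.0
for _ in range(10):                       # 10 samples at an assumed 100 Hz rate
    phi_e, phi_cor = control_step(phi_cmd=0.0, phi_e=phi_e, omega_g=2.0, dt=0.01)
print(round(phi_e, 2), round(phi_cor, 2))  # 0.2 deg accumulated, -0.2 correction
```

    Any noise on ωG is integrated along with the true rate, which is how gyroscope noise propagates directly into the platform's pointing error.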
