What questions regarding new product development or manufacturing do you all have?

Niptuck MD

This is a new thread where I will shed some light and share expertise for you all to benefit from. I hope some of the principles can be applied to your innovations, thoughts, ideas, etc. In commercialization, design, planning, control, and organization are all simultaneously embedded as parts of the process of making your product(s) come to fruition.

Design for manufacturability (DFM) is the process of proactively designing products to (1) optimize all the manufacturing functions: fabrication, assembly, test, procurement, shipping, delivery, service, and repair, and (2) assure the best cost, quality, reliability, regulatory compliance, safety, time-to-market, and customer satisfaction.

Concurrent Engineering is the practice of concurrently developing products and their manufacturing processes.
If existing processes are to be utilized, then the product must be designed for these processes.
If new processes are to be utilized, then the product and the process must be developed concurrently.

Design for Manufacturability and Concurrent Engineering are proven design methodologies that work for any size company. Early consideration of manufacturing issues shortens product development time, minimizes development cost, and ensures a smooth transition into production for quick time to market. These techniques can be used to commercialize prototypes and research.

HOW TO DEVELOP COMMERCIALIZED PRODUCTS BY DESIGN
The ideal way to commercialize products and production systems is to design them "right the first time" for the most optimal manufacturability, cost, quality, time, and functionality. Commercialization of research should include the following:

• valuable resources and time should be focused on the identified "mainstays"

• everything else can then be optimized for manufacturability, quality, reliability, part availability, and fast ramps to stable production.

• and much of that can be procured off-the-shelf, thus freeing more resources to focus on the "mainstays"/cash cows

ONE OF THE BIGGEST OVERSIGHTS IS HOW FEW STARTUPS, NEW PRODUCT DEVELOPERS, AND CREATORS ANALYZE THE INITIAL COSTING OF THEIR VENTURES.
To address this, the firm or individual must:

Quantify Total Cost. The more important cost is, the more important it is to measure it properly. For ambitious cost goals, cost measurements absolutely must quantify all costs that contribute to the selling price. Until company-wide total cost measurements are implemented, the design team needs to make cost decisions on the basis of total cost thinking, or for important decisions, manually gather all the costs. Since a large portion of cost savings will be in overhead, the costing must ensure that new products are not burdened with the averaged overhead charges of other products, but only the specific overhead charges that are appropriate for the innovative product.
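To make "total cost thinking" concrete, here is a minimal sketch (all numbers and category names are made-up assumptions, not figures from this post) of rolling up a per-unit total cost and charging the product only the overhead it actually causes, contrasted with a naive averaged burden:

```python
# Minimal total-cost roll-up (illustrative, assumed numbers only).
direct_material = 18.40      # purchased parts and raw material per unit, $
direct_labor = 6.10          # assembly and test labor per unit, $

# Overhead charged by specific cause, not an averaged factory-wide burden
specific_overhead = {
    "material overhead (receiving, inspection, storage)": 1.20,
    "setup / changeover amortized per unit":              0.85,
    "engineering change order allowance":                 0.60,
    "quality, warranty, rework":                          1.10,
    "shipping, storage, fulfillment":                     2.30,
}

total_cost = direct_material + direct_labor + sum(specific_overhead.values())
print(f"Total unit cost: ${total_cost:.2f}")

# Contrast: labor + material + a 200% averaged overhead burden on labor
naive_cost = direct_material + direct_labor * (1 + 2.00)
print(f"Averaged-burden cost: ${naive_cost:.2f}")
```

The point is not the arithmetic but the categories: every cost that feeds the selling price shows up explicitly instead of hiding in an averaged burden rate.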

COST DRIVERS

In any change process, there is always some “low-hanging fruit” – those opportunities to show significant gains without expending a great deal of effort. Agents of change should always look for these opportunities as success in these high-leverage areas can generate interest and support for more ambitious efforts. They also provide a good way to get the change process started in cases where there is a lack of widespread support.

In implementing activity-based costing, low-hanging fruit can often be found by identifying and measuring the cost of the organization’s major cost drivers. Cost drivers are defined as the root causes of a cost – the things that “drive” costs. Associating costs with their drivers makes cost information more accurate and relevant and encourages behavior to lower or eliminate costs.

The cost of major cost drivers can usually be found “lumped together” with the costs from a wide variety of other, unrelated cost drivers in a single pool of costs known as overhead. This pool of overhead contains all costs that cannot be defined as either direct material or direct labor. They are blended together like peanut butter, incorrectly treated as a homogeneous pool of costs, and, like peanut butter, spread around to products and customers, usually using direct labor as a knife.

This overhead pool is almost always greater than the direct labor it is spread over and is often greater than the direct material portion of a company's costs. It contains costs that relate to some of the company's most important cost drivers. Without being linked to their causes, however, these costs are very difficult to understand and manage. For example, overhead pools usually contain the cost of activities related to:

-Engineering Change Orders
-Purchasing, receiving, testing, and storing raw materials and purchased components
-Quality, scrap, rework, and other non-value-adding activities
-Moving and storing in-process inventory
-Setting up or changing over equipment
-Handling and storage of finished goods
-Picking and shipping releases and orders

Yet few companies know the cost of an ECO, the "material overhead" related to the various types of direct items they purchase, the cost of an in-process move, the cost of setting up or changing over a piece of equipment, or the cost of post-manufacturing work (like storage and fulfillment) required to meet the demands of their various customers.

The key is to identify the major cost drivers and then develop the best estimates practical to measure the costs related to the driver and quantify the driver itself.
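As a rough sketch of that idea, the snippet below (hypothetical pools, drivers, and volumes) charges a product only for the cost drivers it actually consumes, rather than spreading overhead as a flat percentage of direct labor:

```python
# Driver-based (activity-based) overhead allocation sketch; all figures assumed.
overhead_pools = {                 # annual pool cost, driver
    "engineering_changes": (120_000, "ECOs processed"),
    "purchasing":          (200_000, "purchase orders placed"),
    "setups":              (150_000, "machine setups"),
}
annual_driver_volume = {"engineering_changes": 300, "purchasing": 4_000, "setups": 1_500}

# Cost per unit of each driver
rate = {k: cost / annual_driver_volume[k] for k, (cost, _driver) in overhead_pools.items()}

# Driver consumption attributable to one product line per year
product_usage = {"engineering_changes": 12, "purchasing": 180, "setups": 60}

product_overhead = sum(rate[k] * product_usage[k] for k in rate)
print({k: round(v, 2) for k, v in rate.items()})
print(f"Driver-based overhead charged to this product: ${product_overhead:,.0f}/yr")
```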

Although accountants might not be able to identify an organization’s cost drivers, they should be intuitively obvious to the company’s experienced managers once they understand the concept. Of the short list of seven drivers noted above, at least one should be a significant issue at any manager’s manufacturing firm. By selecting the one that appears most significant and estimating “the numbers,” insights should be gained that can significantly impact the company’s thinking.

Once the connection is made between costs and their drivers, managers will be able to see the linkage between the characteristics and behavior of a product or customer and its total cost to the organization. This includes the impact of:

-Volume: high volume or low volume
-Degree of customization: standard or custom
-Part standardization: approved or preferred
-Part destination: production parts or spare parts for products that are out of production
-Distribution costs: direct or through channels
-Product age: launching or stabilized or aging (experiencing processing incompatibilities with newer products and/or availability challenges for parts and raw materials)
-Market niches: commercial, OEM, military, medical, or nuclear (different markets have varying demands for quality, paperwork, proposals, reports, certifications, traceability, etc.).
 

Niptuck MD

The Thorough Up-Front Work

As both innovators and sourcers realize the importance of thorough up-front work, they ask what more should be done in the conceptualization phase, which carries a higher proportion of the work, and how this can actually shorten the back end of the timeline so much. The key elements of an optimal architecture phase are the following:

• A solid product definition captures what customers really want and minimizes the chance of change orders issued to reflect "new" customer needs that should have been understood and anticipated in the beginning.

• Validate (and verify) assumptions. Evaluate, challenge, and dissect assumptions (thoroughly), especially those that will commit the project to a certain path.

• Diverse opinions are sorted out early with respect to data about customer needs and project assumptions.

• Regulatory compliance: develop compliance plans for current/known regulations and identify likely scenarios for potential regulatory changes, commissioning research as necessary. Categorize changes that would force a requalification, especially customer-induced changes and changes needed for manufacturability. Based on that, formulate plans to minimize customer changes (per the first bullet above) and use Concurrent Engineering to design the product for manufacturability.

• Issues are raised and resolved before proceeding further, thus minimizing:

(a) the need for expensive, risky, and time-consuming work-arounds on every build, or

(b) the chances that these issues will have to be resolved later when changes are expensive, hard to implement, and may, in turn, induce yet more changes.


• The architecture should be optimized for the minimum total cost, for designed-in quality and reliability, for manufacturability, serviceability, and for flexibility and customizability. The architecture may need to be optimized for product families, variety, extensions, next generations, contingencies, and growth.

The Design Phase Considerations and Methodologies

With the thorough up-front work done right, the actual design phase can proceed quickly and smoothly....

• Vendor partnerships should be arranged early so that vendors/partners are predetermined.

Vendor partnerships are the most efficient way to ensure the manufacturability of custom parts while concurrently designing tooling, thus minimizing ramp delays. They effectively expand the size of the team without hiring more employees or reassigning them from other projects. They also avoid losing your scarce resources to problems caused by low-bid vendors or vendors who just build-to-print whatever you sent them in a request for quotation.
And, contrary to common beliefs and policies, vendor partnerships will actually lead to lower net costs. Plain and simple.

Tooling and processing development should be started early in which all potential concepts have enough concurrent engineering to vet potential production approaches for feasibility and assure adequate production and supply chain capacity will be achieved without delays for tooling problems. Don’t wait until the part is designed to start thinking about fixtures and tooling concepts.

Part and material availability can be assured by selecting parts for availability, not just function.
Basing production designs on hard-to-get parts, which may have been selected for a proof-of-principle, may compromise order fulfillment and ultimately limit growth. Selecting parts for availability needs to be done all along, because availability problems are hard to remedy after qualification.

Tolerances should be appropriately tight and consistently achievable at low cost and fast throughput. Avoid basing research on excessively tight tolerances that carry into production and get locked in by qualifications.
• Skill demands. Never exceed the capability of production-line workers in your plant or at contract manufacturers. Avoid building proofs-of-concept that can only be built by highly skilled scientists, engineers, or prototype technicians, because once approved, qualified, and put into production, highly skilled production-line workers will be needed, who may be hard to find, train, and retain, and may limit growth. Further, if not managed really well, dependence on skilled labor may cause quality vulnerabilities, raise costs, and delay the launch.

Off-the-shelf parts: as part of the design phase, you should focus valuable resources on the mainstay(s), which are what customers will buy your products for, and get the rest off-the-shelf whenever possible. For example:

Customers buy electronic products for the unique, innovative features and functions they accomplish, not routine computations, controls, communications, and power supplies that are just expected to work reliably.

Customers buy mechanical products for the unique, innovative structures or motions they provide, not the routine motions, controls, enclosures, and structures that support the mainstays and are expected to work reliably.

What is needed from these routine support parts is adequate functionality, assured availability at any volume, no risk, and high quality and reliability. Proven off-the-shelf parts can quantitatively assure all of these through their "track records," which is not the case for custom-designed parts that introduce many variables, unknowns, and risks.
But despite these opportunities, most design teams do not even consider off-the-shelf parts because of the following inhibitions, which can be set straight by these principles:

• Just because a product is leading-edge doesn’t mean all the parts have to be custom. In fact, the product will be better if everyone focuses on what is really leading-edge.

• Some teams may not do a thorough enough search, or may not even look for "better" OTS (off-the-shelf) parts, assuming they will cost more. However, the total cost may be less because of all the costs of developing and debugging the "just right size" custom version. If "better" means larger or heavier, that may be a consideration in specific industries, but it may not be a factor for miniaturized parts like electronics.

• OTS parts may appear to cost more than in-house-built parts because OEMs pay total cost for them, while in-house parts don't include all the overhead costs because those are rarely quantified.


Bottom line: if OTS parts are not considered early enough, then arbitrary decisions preclude their use – for example, if circuitry is designed with too many voltages, that may preclude reliable off-the-shelf power supplies.


The paradox of off-the-shelf parts is that designers may have to first choose the best off-the-shelf parts and then literally design the product around them. But it may be worth it to focus finite resources and time on your key mainstays.

- Standardization. Until a company- or division-wide effort establishes standard parts lists, the project should standardize on key parts for the product, at least for the following categories, before detailed design starts:

• Fasteners, which usually proliferate wildly if designers specify the just-what-is-needed size for every need. To curtail this proliferation before it starts, you should select a baseline list of standard fasteners for the needed sizes, loads, and environments, for instance: "small, medium, and large, and maybe one or two in between." As in all standardization, most applications will get a "better" part than needed, but no one should resist standardization just because it raises a BOM (bill of materials) entry slightly; the total cost savings will be much greater (a minimal selection sketch follows this section).
For instance, all bolts can be standardized on the strongest grade. This provides an automatic mistake-proofing benefit by preventing a weaker bolt from accidentally being used where a stronger bolt should have been used.
For small parts, like fasteners and integrated circuits, there would be minimal, if any, weight penalty for such standardization.

Expensive or hard-to-get parts. Standardizing on expensive parts is one solution to long-lead-time part problems: it enables steady flows of parts that will be used one way or another, borrowing from other users in emergencies, and even stocking the standard versions, none of which would be possible with a plethora of just-the-right-size versions. More advanced or higher-capacity parts may weigh more or take up more space, but that may be cancelled out if the more advanced part combines what would otherwise be many discrete parts.

The bottom line is that standardization will:

(a) help ensure part availability, by design, for peaks in demand and for growth;

(b) greatly improve serviceability, repair, and maintenance while minimizing the cost and maximizing the usefulness of spare parts kits; and

(c) deliver net total cost savings that justify some applications getting a "better" part than needed.
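As a rough illustration of the "small, medium, large" idea above, this sketch (hypothetical sizes and ratings, strongest grade assumed throughout) picks the smallest standard bolt that covers a required load, so most applications get a slightly "better" part while the BOM stays short:

```python
# Hypothetical standard fastener list: (name, rated load in N), strongest grade only.
STANDARD_BOLTS = [("M4", 1_500), ("M6", 4_000), ("M10", 12_000)]

def pick_standard_bolt(required_load_n: float) -> str:
    """Return the smallest standard bolt whose rating covers the load."""
    for name, rating in STANDARD_BOLTS:
        if rating >= required_load_n:
            return name
    raise ValueError("Load exceeds the standard list; add a larger standard size.")

for load in (900, 2_200, 7_500):
    print(load, "N ->", pick_standard_bolt(load))   # M4, M6, M10
```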
 

Niptuck MD

Why Cost Is Hard to Remove After Design

Cost is very difficult to remove after the product is designed: roughly 80% of a product's cost is determined by the design and is very difficult to remove later. Attempting cost reduction by changing the design encounters the following very common obstacles:

• There is always the possibility that one change may force other changes.

• Trying to significantly lower cost after production release is usually futile because of many early decisions, which severely limit opportunities.

• Finally, the total cost of doing the change may not be paid back by the cost savings within the expected life of the product. Few companies really keep track of the total costs of changing designs.

Cost Reduction Problems of Focusing Only on Parts and Labor

All cost initiatives should be suspect unless they are based on total cost. Measuring only parts and labor puts the whole cost focus there instead of on total cost, which includes many more costs normally lumped together in "overhead." And contrary to popular myth, overhead is not fixed. If companies implement, and design products for, lean production, floor space needs – normally a "fixed" cost – can be drastically reduced.

Counterproductive effects. Focusing only on parts and labor can lead to seriously counterproductive effects. Truly low-cost products do not come from cheap parts, which are often chosen because they appear to lower the reported material costs. To make matters worse, the internet now offers online part "auctions" that effectively steer manufacturers to the lowest bidder. However, cheap parts will usually explode other costs: quality, service, operations, and other overhead.
Low-cost products also do not result from "saving" cost by cutting product development and continuous improvement efforts. This may not be a stated policy per se, but product development budgets can be impacted by corporate directives like "all departments will reduce their budgets by 15%."

Low Labor Rate May Not Lower Labor Cost. Moving production to "low labor rate" countries is another common cost reduction misstep. Lower labor efficiency alone might cancel out the anticipated labor rate savings, for instance if the labor rate is one third but labor productivity is also one third. And cheap labor rarely stays that way.
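A quick worked example of that point, with assumed figures:

```python
# Assumed figures: a "one-third labor rate" site with one-third the productivity
# has exactly the same effective labor cost per unit.
domestic_rate = 30.0            # $/hour
domestic_units_per_hour = 3.0

offshore_rate = 10.0            # $/hour (one third of domestic)
offshore_units_per_hour = 1.0   # one third of domestic productivity

print("domestic labor $/unit:", domestic_rate / domestic_units_per_hour)   # 10.0
print("offshore labor $/unit:", offshore_rate / offshore_units_per_hour)   # 10.0
# ...before adding back freight, in-transit inventory, quality escapes,
# and slower response to demand changes.
```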

Many Designs Are Needlessly Labor-Intensive. Many decisions to move production to low-labor-rate areas are based on labor-intensive designs. However, effective DFM can reduce labor content to the point where moving to low-labor-rate areas can no longer be justified.

Cheap Parts and Cheap Labor Compromise Quality. Quality may suffer if the cheapest-labor plant has not established an effective quality culture. Quality may also suffer if recurring defects are produced overseas and not detected until hundreds of defective products are discovered at the end of the long transoceanic journey. Overseas production also slows delivery, making it hard to implement build-to-order (just-in-time) fulfillment.

Cutting Corners is No Way to Cut Cost

Similarly, “cutting corners” in any manner will probably end up costing much more later, for instance for quality costs. Omitting features and cheapening the products is an unwise strategy to reduce cost. Sure, the stripped-down product may cost less, but it could ruin the company’s reputation.
 

Niptuck MD


DFM Guideline P1) Adhere to specific process design guidelines.

It is very important to use specific design guidelines for parts to be produced by specific processes such as welding, casting, forging, extruding, forming, stamping, turning, milling, grinding, powdered metallurgy (sintering), plastic molding, etc.

DFM Guideline P2) Avoid right/left hand parts.

Avoid designing mirror image (right or left hand) parts. Design the product so the same part can function in both right or left hand modes. If identical parts can not perform both functions, add features to both right and left hand parts to make them the same.

Another way of saying this is to use "paired" parts instead of right- and left-hand parts. Purchasing paired parts (plus all the internal material supply functions) then handles twice the quantity but half the number of part types. This can have a significant impact with many paired parts at high volume.

DFM Guideline P3) Design parts with symmetry.

Design each part to be symmetrical from every "view" (in a drafting sense) so that the part does not have to be oriented for assembly. In manual assembly, symmetrical parts can not be installed backwards, a major potential quality problem associated with manual assembly. In automatic assembly, symmetrical parts do not require special sensors or mechanisms to orient them correctly. The extra cost of making the part symmetrical (the extra holes or whatever other feature is necessary) will probably be saved many times over by not having to develop complex orienting mechanisms and by avoiding quality problems.

It is a little-known fact that in felt-tipped pens, the felt is pointed on both ends so that automatic assembly machines do not have to orient the felt.

DFM Guideline P4) If part symmetry is not possible, make parts very asymmetrical.

The best part for assembly is one that is symmetrical in all views. The worst part is one that is slightly asymmetrical, which may be installed wrong because the worker or robot could not notice the asymmetry. Or worse, the part may be forced in the wrong orientation by a worker (who thinks the tolerance is wrong) or by a robot (which does not know any better).

So, if symmetry cannot be achieved, make the parts very asymmetrical. Workers will then be less likely to install the part backward because it will not fit backward, and automation machinery may be able to orient the part with less expensive sensors and intelligence.

In fact, very asymmetrical parts may even be able to be oriented by simple stationary guides over conveyor belts.

DFM Guideline P5) Design for fixturing.

Understand the manufacturing process well enough to be able to design parts and dimension them for fixturing. Parts designed for automation or mechanization need registration features for fixturing. Machine tools, assembly stations, automatic transfers, and automatic assembly equipment need to be able to grip or fixture the part in a known position for subsequent operations. This requires registration locations on which the part will be gripped or fixtured while the part is being transferred, machined, processed, or assembled.

DFM Guideline P6) Minimize tooling complexity by concurrently designing tooling.

Use concurrent engineering of parts and tooling to minimize tooling complexity, cost, delivery lead-time and maximize throughput, quality and flexibility.

DFM Guideline P7) Make part differences very obvious for different parts.

Different materials or internal features may not be obvious to workers. Make sure that part differences are obvious. This is especially important in rapid assembly situations where workers handle many different parts. To distinguish different parts, use markings, labels, color, or different packaging if they come individually packaged. One company uses different (but functionally equivalent) coatings to distinguish metric from English fasteners.

DFM Guideline P8) Specify optimal tolerances for a Robust Design.

Design of Experiments can be used to determine the effect of variations in all tolerances on part or system quality. The result is that all tolerances can be optimized to yield a robust design that provides high quality at low cost.

DFM Guideline P9) Specify quality parts from reliable sources.

The "rule of ten" specifies that it costs 10 times more to find and repair a defect at each subsequent stage of assembly. Thus, it costs 10 times more to find a part defect at a sub-assembly; 10 times more again to find a sub-assembly defect at final assembly; 10 times more in the distribution channel; and so forth. All parts must have reliable sources that can deliver consistent quality over time in the volumes required.
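A quick illustration of how the rule of ten compounds across stages (relative cost units; the stage list is just an assumed example):

```python
# "Rule of ten": cost to find and repair a defect grows ~10x at each later stage.
stages = ["at the part", "at sub-assembly", "at final assembly",
          "in the distribution channel", "at the customer"]
for i, stage in enumerate(stages):
    print(f"{stage}: {10**i:,}x")   # 1x, 10x, 100x, 1,000x, 10,000x
```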

DFM Guideline P10) Minimize Setups. For machined parts, ensure accuracy by designing parts and fixturing so that all key dimensions are cut in the same setup (chucking). Removing the part to re-position it for subsequent cutting lowers accuracy relative to cuts made in the original position. Single-setup machining is less expensive too.

DFM Guideline P11) Minimize Cutting Tools. For machined parts, minimize cost by designing parts to be machined with the minimum number of cutting tools. For CNC "hog out" material removal, specify radii that match the preferred cutting tools (avoid arbitrary decisions). Keep tool variety within the capability of the tool changer.

DFM Guideline P12) Understand tolerance step functions and specify tolerances wisely. The type of process depends on the tolerance. Each process has its practical "limit" to how close a tolerance could be held for a given skill level on the production line. If the tolerance is tighter than the limit, the next most precise (and expensive) process must be used. Designers must understand these "step functions" and know the tolerance limit for each process.
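The step-function idea can be sketched as a simple lookup: each process has a practical tolerance limit and a relative cost, and the specified tolerance determines the cheapest process that can hold it. The process names, limits, and cost indices below are illustrative assumptions only, not a published capability table:

```python
# Illustrative process capability "steps": (process, tightest practical tolerance in mm,
# relative cost index). Values are assumptions for the sketch only.
PROCESSES = [
    ("sawing / flame cutting", 1.0,   1),
    ("milling",                0.05,  3),
    ("grinding",               0.005, 8),
    ("lapping",                0.001, 20),
]

def cheapest_process(tolerance_mm: float):
    """Pick the least expensive process whose practical limit meets the tolerance."""
    feasible = [(name, cost) for name, limit, cost in PROCESSES if limit <= tolerance_mm]
    if not feasible:
        raise ValueError("Tolerance is tighter than any listed process can hold.")
    return min(feasible, key=lambda proc: proc[1])

print(cheapest_process(0.1))     # ('milling', 3)
print(cheapest_process(0.003))   # ('lapping', 20) -- one step tighter, much more expensive
```

Tightening a tolerance past a process limit does not add a little cost; it jumps to the next, more expensive step.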
 

Niptuck MD

Design for Assembly (DFA) techniques aim to reduce the cost and time of assembly by simplifying the product and process through such means as reducing the number of parts, combining two or more parts into one, reducing or eliminating adjustments, simplifying assembly operations, designing for parts handling and presentation, selecting fasteners for ease of assembly, minimizing parts tangling, and ensuring that products are easy to test. For example, tabs and notches in mating parts make assembly easier, and also reduce the need for assembly and testing documentation. Simple z-axis assembly can minimize handling and insertion times.

The impact of DFA will be found throughout the overall design and manufacturing process. Use of DFA to reduce the number of parts will help reduce inventory, and so will help reduce inventory management effort. As a result, it will support activities such as Just In Time (JIT) aimed at improving shop-floor performance. Use of DFA to develop modular products making use of common parts will allow the variety desired by Marketing while limiting the workload on the Manufacturing function. Modular sub-assemblies can be built and tested independently. Model variations can be created at the subsystem level.
 


Niptuck MD

Best Practice Techniques

Different people have different understandings of the term 'Best Practice Techniques'. To keep things very basic, 'Best Practice Techniques' here means the many modern test and analysis methods used to keep checks and balances during manufacturing. Some of these have been in existence for years but still appear modern compared to the very traditional methods used by many organizations. They include techniques such as Benchmarking, Design for Assembly (DFA), Failure Modes Effects and Criticality Analysis (FMECA), Activity Based Costing (ABC), and Taguchi techniques.

Benchmarking is the continuous process of measuring products, services, and practices against a product development organization's toughest competitors or those renowned as industry leaders. If the other organizations are found to have more effective operations, the product development organization can work out why they are better, and then start to improve its own operations.

DFA techniques aim to reduce the cost and time of assembly by simplifying the product and process through such means as reducing the number of parts, combining two or more parts into one, simplifying assembly operations, designing for parts handling, selecting fasteners for ease of assembly and ensuring that products are easy to test. Design for Manufacture (DFM) techniques are closely linked to DFA techniques, but are more oriented to individual parts and components rather than to DFA's sub-assemblies, assemblies, and products. DFM aims to eliminate the unnecessary features of a part that make it difficult and expensive to manufacture.

FMECA/PFMEA are quality tools that can be applied to systems, products, manufacturing processes, and equipment to identify the possible ways in which failure can occur, the corresponding causes of failure, and the resulting effects of failure.

ABC is a costing technique used to overcome deficiencies of traditional product costing systems which may calculate inaccurate product costs. The reason for these errors is often that the attributes chosen to characterize costs related to a particular product are attributes of unit products (such as direct labor hours per product) whereas many costs (such as set-up time) are related to batches of products. ABC is based on the principle that it is not the products that generate costs, but the activities that are performed in planning, procuring and producing the products. It is the resources that are necessary to support these activities that result in costs being incurred. ABC calculates product costs by determining the extent to which a product makes use of the activities.

Taguchi identified three phases in product design - system design, parameter design and tolerance design. During system design, overall conceptual design takes place. Theoretical knowledge and practical experience are used to ensure the product should function with the required behavior. Features and functions, including materials, parts and tentative parameter values are selected. During parameter design, target values of product and process parameters should be chosen so as to minimize variability. As there is often no exact theoretical relationship between design parameters and fluctuations in product behavior, the only way to find out values of design parameters that minimize variability is to experiment to show how the factors causing fluctuation affect performance. Due to the large number of parameters and combinations possible, it is usually impractical to investigate all possible combinations. Taguchi's experimental design techniques allow designers to experiment with a large number of variables with relatively few experiments.
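The scale of the problem Taguchi's arrays address is easy to show: even a modest set of two-level factors produces a full factorial far too large to run, while a standard orthogonal array (for example, the L8 array, which handles up to seven two-level factors in eight runs) keeps the experiment practical. A minimal count, purely illustrative:

```python
# Full factorial vs. a Taguchi-style orthogonal array (counts only, no real data).
from itertools import product

factors = {f"factor_{i}": ("low", "high") for i in range(1, 8)}  # 7 two-level factors
full_factorial_runs = len(list(product(*factors.values())))

print("full factorial runs:", full_factorial_runs)    # 2**7 = 128
print("L8 orthogonal array runs:", 8)                 # standard L8(2^7) design
```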

 

Niptuck MD

Efficient product costing and inventory valuation are heavily emphasized in today's manufacturing environment because of their importance to company decisions. However, volatile raw material prices in particular challenge the reliability of product costing and inventory valuation.

Importance and Challenges of Valuing Raw Material Inventory
Inventory is often the largest and most important asset that a company owns. As an asset, inventory has a direct impact on the profitability of the company and especially on reporting the company's success in the balance sheet. Inventory appears on the balance sheet under current assets and flows through the income statement as cost of goods sold. So, inventory valuation affects both the profitability and the committed capital of the company. In each accounting period, appropriate expenses must be matched with revenues in order to determine appropriate income. In inventory accounting this includes determining the cost of goods sold that should be deducted from sales. That is why net income depends directly on inventory valuation.

A major challenge in inventory valuation is the volatility of raw material prices; because of this volatility there can be major differences between the inventory value and the budgeted value. That is why it is important to separate the total variance into a planning variance and an operational variance. Planning variances seek to explain how original standards need to be adjusted in order to reflect changes in operating conditions (raw material price changes) between the current situation and the time when the standard was originally calculated. In effect, the original standard is updated so that it is a realistic target in current conditions. Operational variances indicate the extent to which attainable targets (the adjusted standards) have been achieved. Operational variances are thus a realistic way of assessing performance.
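A minimal sketch of that split, using assumed prices and quantities (not figures from the text):

```python
# Splitting a total material price variance into planning and operational parts.
# All numbers are assumptions for illustration.
original_std_price = 4.00   # $/kg standard when the budget was set
revised_std_price  = 5.25   # $/kg realistic standard at current market prices
actual_price       = 5.00   # $/kg actually paid
qty_kg             = 10_000

planning_variance    = (original_std_price - revised_std_price) * qty_kg   # -12,500 (adverse)
operational_variance = (revised_std_price  - actual_price)      * qty_kg   #  +2,500 (favourable)
total_variance       = (original_std_price - actual_price)      * qty_kg   # -10,000 (adverse)

assert planning_variance + operational_variance == total_variance
print(planning_variance, operational_variance, total_variance)
```

The planning variance is the market moving against the original standard; the operational variance shows that purchasing actually beat the realistic, updated target.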

There are basically two general approaches to classifying cost variances for control purposes. First, there is an approach that classifies all variances as period expenses; in this approach, any savings or expenses above or below normal are abnormal. If management sees that, for control purposes, carrying inventory at standard cost better reflects the situation in the company, then it is reasonable to classify variances as period expenses. Second, especially for financial reporting and accounting purposes, there is the actual costing method, in which variances are prorated to inventories and cost of sales.

The most common method of allocating variances in overhead costs is to assign them to cost of goods sold. Another way is to assign the overhead variances to the production accounts: work-in-progress, finished goods not sold, and finished goods sold.

Many people and departments within an organization impact product cost:
  • An engineering team decides on a specific design, but there are multiple alternatives that meet the same form, fit, and functional requirements. Each dictates a different cost.
  • A sourcing team pays to produce a specific design, but there are multiple potential costs for manufacturing the design. Manufacturing costs are often negotiable and depend on plant cost structure, capabilities, and process control.
  • A manufacturing team selects one way to produce a specific design and estimates a ballpark cost, but there may be several more cost-effective ways to manufacture the same design.
The benefits of a systematic product cost management (PCM) program are significant, yet many manufacturers struggle to implement these initiatives effectively. This article discusses some key principles to guide and execute an effective PCM program for maximum impact.

Traditionally, PCM has been performed by cost engineering experts, or by Value Analysis/Value Engineering (VAVE) team members who specialize in cost reduction and/or support core business functions. These resources typically have strong manufacturing backgrounds and may have worked as a supplier quote estimator. Their expertise is unique and their domain knowledge builds over time, but it is extremely difficult to duplicate and scale across products in a large organization.

Effective PCM requires a set of systematic activities, processes, and tools for use throughout the enterprise to guide the above decisions to the lowest possible costs. This enables manufacturing organizations to attack cost at the point of origin and yield the greatest impact on product cost reduction.

The core activities above fit into various functions and processes over a product's life cycle and include key Cost Control Points during the overall development process. These are measurable, managed checkpoints that dictate where and when people should perform the activities outlined above. The output and results of these activities build on each other throughout the product development lifecycle. For example, during the introduction of a new product, there are typically design review meetings at regular intervals to ensure the new product is meeting form, fit, and functional requirements. However, rarely is there a conversation about the financial implications of the design alternatives being evaluated. An effective PCM effort should include mandatory cost evaluation as part of key design review milestones.

Another example would be as a design reaches the release to manufacturing (RTM) milestone. At this point in the process, there is often a decision to make or buy that product, or key components within it. A company with a cost control point at that RTM milestone would quickly calculate the financial impact of both options, and make an economically-wise decision in a fraction of the time that it would take to create and manage an RFP response from a supplier.
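A minimal sketch of such a cost control point check, with hypothetical figures:

```python
# Make-vs-buy check at the release-to-manufacturing (RTM) milestone.
# All figures are hypothetical placeholders, not real quotes.
annual_volume = 20_000

make_unit_cost = 7.80          # in-house cost per unit
make_tooling = 45_000          # tooling amortized over the first year

buy_unit_cost = 8.60           # supplier piece price
buy_logistics_per_unit = 0.55  # freight, receiving, incoming inspection per unit

make_total = make_unit_cost * annual_volume + make_tooling
buy_total = (buy_unit_cost + buy_logistics_per_unit) * annual_volume

print(f"make: ${make_total:,.0f}   buy: ${buy_total:,.0f}")
print("decision:", "make" if make_total < buy_total else "buy")
```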
 

Niptuck MD

A lot of factories (big and small) now incorporate prototyping with 3D printing to cut costs. Here is a primer on 3D printing for those of you who wish to go this route. The relatively low tooling costs, coupled with the simplicity of changing designs during the primary design phase, make this a lucrative option.

3D Printing Processes
Stereolithography

Stereolithography (SL) is widely recognized as the first 3D printing process; it was certainly the first to be commercialised. SL is a laser-based process that works with photopolymer resins, which react with the laser and cure to form a solid in a very precise way, producing very accurate parts. It is a complex process, but simply put: the photopolymer resin is held in a vat with a movable platform inside. A laser beam is directed in the X-Y axes across the surface of the resin according to the 3D data supplied to the machine (the .stl file), and the resin hardens precisely where the laser hits the surface. Once the layer is completed, the platform within the vat drops down by a fraction (in the Z axis) and the subsequent layer is traced out by the laser. This continues until the entire object is completed and the platform can be raised out of the vat for removal.

Because of the nature of the SL process, it requires support structures for some parts, specifically those with overhangs or undercuts. These structures need to be manually removed.

In terms of other post processing steps, many objects 3D printed using SL need to be cleaned and cured. Curing involves subjecting the part to intense light in an oven-like machine to fully harden the resin.

Stereolithography is generally accepted as one of the most accurate 3D printing processes, with an excellent surface finish. Limiting factors, however, include the post-processing steps required and the stability of the materials over time, as parts can become more brittle.



DLP

DLP — or digital light processing — is a similar process to stereolithography in that it is a 3D printing process that works with photopolymers. The major difference is the light source. DLP uses a more conventional light source, such as an arc lamp, with a liquid crystal display panel or a deformable mirror device (DMD), which is applied to the entire surface of the vat of photopolymer resin in a single pass, generally making it faster than SL.

Also like SL, DLP produces highly accurate parts with excellent resolution, but its similarities also include the same requirements for support structures and post-curing. However, one advantage of DLP over SL is that only a shallow vat of resin is required to facilitate the process, which generally results in less waste and lower running costs.



Laser Sintering / Laser Melting


Laser sintering and laser melting are interchangeable terms that refer to a laser based 3D printing process that works with powdered materials. The laser is traced across a powder bed of tightly compacted powdered material, according to the 3D data fed to the machine, in the X-Y axes. As the laser interacts with the surface of the powdered material it sinters, or fuses, the particles to each other forming a solid. As each layer is completed the powder bed drops incrementally and a roller smoothes the powder over the surface of the bed prior to the next pass of the laser for the subsequent layer to be formed and fused with the previous layer.

The build chamber is completely sealed as it is necessary to maintain a precise temperature during the process specific to the melting point of the powdered material of choice. Once finished, the entire powder bed is removed from the machine and the excess powder can be removed to leave the ‘printed’ parts. One of the key advantages of this process is that the powder bed serves as an in-process support structure for overhangs and undercuts, and therefore complex shapes that could not be manufactured in any other way are possible with this process.

However, on the downside, because of the high temperatures required for laser sintering, cooling times can be considerable. Furthermore, porosity has been an historical issue with this process, and while there have been significant improvements towards fully dense parts, some applications still necessitate infiltration with another material to improve mechanical characteristics.

Laser sintering can process plastic and metal materials, although metal sintering does require a much higher powered laser and higher in-process temperatures. Parts produced with this process are much stronger than with SL or DLP, although generally the surface finish and accuracy are not as good.



Extrusion / FDM / FFF

3D printing utilizing the extrusion of thermoplastic material is easily the most common (and most recognizable) 3DP process. The most popular name for the process is Fused Deposition Modelling (FDM), due to its longevity; however, this is a trade name registered by Stratasys, the company that originally developed it. Stratasys' FDM technology has been around since the early 1990s and today is an industrial-grade 3D printing process. However, the proliferation of entry-level 3D printers that have emerged since 2009 largely utilize a similar process, generally referred to as Fused Filament Fabrication (FFF), but in a more basic form due to patents still held by Stratasys.

The process works by melting plastic filament that is deposited, via a heated extruder, a layer at a time, onto a build platform according to the 3D data supplied to the printer. Each layer hardens as it is deposited and bonds to the previous layer.

Stratasys has developed a range of proprietary industrial grade materials for its FDM process that are suitable for some production applications. At the entry-level end of the market, materials are more limited, but the range is growing. The most common materials for entry-level FFF 3D printers are ABS and PLA.

The FDM/FFF processes require support structures for any applications with overhanging geometries. For FDM, this entails a second, water-soluble material, which allows support structures to be relatively easily washed away, once the print is complete. Alternatively, breakaway support materials are also possible, which can be removed by manually snapping them off the part. Support structures, or lack thereof, have generally been a limitation of the entry level FFF 3D printers. However, as the systems have evolved and improved to incorporate dual extrusion heads, it has become less of an issue.

In terms of models produced, the FDM process from Stratasys is an accurate and reliable process that is relatively office/studio-friendly, although extensive post-processing can be required. At the entry-level, as would be expected, the FFF process produces much less accurate models, but things are constantly improving.

The process can be slow for some part geometries, and layer-to-layer adhesion can be a problem, resulting in parts that are not watertight. Again, post-processing with acetone can resolve these issues.



Inkjet

There are two 3D printing processes that utilize a jetting technique.

Binder jetting: the material being jetted is a binder, selectively sprayed into a powder bed of the part material to fuse it a layer at a time and create/print the required part. As is the case with other powder bed systems, once a layer is completed, the powder bed drops incrementally and a roller or blade smoothes the powder over the surface of the bed prior to the next pass of the jet heads, so the subsequent layer can be formed and fused with the previous layer.

Advantages of this process, as with laser sintering, include the fact that the need for supports is negated because the powder bed itself provides this functionality. Furthermore, a range of different materials can be used, including ceramics and food. A further distinctive advantage is that a full colour palette can easily be added via the binder.

The parts resulting directly from the machine, however, are not as strong as with the sintering process and require post-processing to ensure durability.

Material jetting: a 3D printing process whereby the actual build materials (in liquid or molten state) are selectively jetted through multiple jet heads (with others simultaneously jetting support materials). The build materials tend to be liquid photopolymers, which are cured with a pass of UV light as each layer is deposited.

The nature of this process allows for the simultaneous deposition of a range of materials, which means that a single part can be produced from multiple materials with different characteristics and properties. Material jetting is a very precise 3D printing method, producing accurate parts with a very smooth finish.



Selective Deposition Lamination (SDL)

SDL is a proprietary 3D printing process developed and manufactured by Mcor Technologies. There is a temptation to compare this process with the Laminated Object Manufacturing (LOM) process developed by Helisys in the 1990s, due to similarities in layering and shaping paper to form the final part. However, that is where any similarity ends.

The SDL 3D printing process builds parts layer by layer using standard copier paper. Each new layer is fixed to the previous layer using an adhesive, which is applied selectively according to the 3D data supplied to the machine. This means that a much higher density of adhesive is deposited in the area that will become the part, and a much lower density of adhesive is applied in the surrounding area that will serve as the support, ensuring relatively easy “weeding,” or support removal.

After a new sheet of paper is fed into the 3D printer from the paper feed mechanism and placed on top of the selectively applied adhesive on the previous layer, the build plate is moved up to a heat plate and pressure is applied. This pressure ensures a positive bond between the two sheets of paper. The build plate then returns to the build height where an adjustable Tungsten carbide blade cuts one sheet of paper at a time, tracing the object outline to create the edges of the part. When this cutting sequence is complete, the 3D printer deposits the next layer of adhesive and so on until the part is complete.

SDL is one of the very few 3D printing processes that can produce full-colour 3D printed parts, using a CMYK colour palette. And because the parts are standard paper, requiring no post-processing, they are wholly safe and eco-friendly. Where the process is not able to compete favourably with other 3D printing processes is in the production of complex geometries, and the build size is limited to the size of the feedstock.
 

Niptuck MD

Introduction
Additive manufacturing (sometimes referred to as rapid prototyping or 3D printing) is a method of manufacture where layers of a material are built up to create a solid object. While there are many different 3D printing technologies this article will focus on the general process from design to the final part. Whether the final part is a quick prototype or a final functional part, the general process does not change.

[Figure: From initial CAD design to 3D printed part, additive manufacturing follows a general series of steps.]
Additive manufacturing process
1. CAD
Producing a digital model is the first step in the additive manufacturing process. The most common method for producing a digital model is computer-aided design (CAD). There are a large range of free and professional CAD programs that are compatible with additive manufacture. Reverse engineering can also be used to generate a digital model via 3D scanning.

There are several design considerations that must be evaluated when designing for additive manufacturing. These generally focus on feature geometry limitations and support or escape hole requirements and vary by technology.

2. STL conversion and file manipulation
A critical stage in the additive manufacturing process that varies from traditional manufacturing methodology is the requirement to convert a CAD model into an STL (stereolithography) file. STL uses triangles (polygons) to describe the surfaces of an object.

Once an STL file has been generated, it is imported into a slicer program. The slicer converts the STL file into G-code, a numerical control (NC) programming language used in computer-aided manufacturing (CAM) to control automated machine tools (including CNC machines and 3D printers). The slicer also allows the designer to customize build parameters, including supports, layer height, and part orientation.
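As a tiny illustration of what a slicer is handed, the sketch below counts the triangle facets in an ASCII STL file before slicing (note that many STL files are binary; real slicers handle both formats, and the file name here is hypothetical):

```python
# Count triangle facets in an ASCII STL file (binary STL is not handled here).
def count_ascii_stl_facets(path: str) -> int:
    facets = 0
    with open(path, "r", errors="ignore") as f:
        for line in f:
            if line.strip().startswith("facet normal"):
                facets += 1
    return facets

# Example usage with a hypothetical file:
# print(count_ascii_stl_facets("bracket.stl"), "triangles")
```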

3. Printing
3D printing machines often comprise many small and intricate parts, so correct maintenance and calibration is critical to producing accurate prints. At this stage, the print material is also loaded into the printer. The raw materials used in additive manufacturing often have a limited shelf life and require careful handling. While some processes offer the ability to recycle excess build material, repeated reuse can result in a reduction in material properties if the material is not replaced regularly.

Most additive manufacturing machines do not need to be monitored after the print has begun. The machine will follow an automated process and issues generally only arise when the machine runs out of material or there is an error in the software.

4. Removal of prints
For some additive manufacturing technologies removal of the print is as simple as separating the printed part from the build platform. For other more industrial 3D printing methods the removal of a print is a highly technical process involving precise extraction of the print while it is still encased in the build material or attached to the build plate. These methods require complicated removal procedures and highly skilled machine operators along with safety equipment and controlled environments.

5. Post processing
Post processing procedures again vary by printer technology. SLA requires a component to cure under UV before handling, metal parts often need to be stress relieved in an oven while FDM parts can be handled right away. For technologies that utilize support, this is also removed at the post-processing stage. Most 3D printing materials are able to be sanded and other post-processing techniques including tumbling, high-pressure air cleaning, polishing, and coloring are implemented to prepare a print for end use.
 


Niptuck MD

This is just a basic primer on patenting.

Before you invest another dollar or minute of your time, use the USPTO's patent search to make sure your idea hasn't already been patented.

Fees vary depending on the type of patent application you submit. Fees may also vary according to the way you "claim" your invention. More information on filing fees and the number and type of claims is available from the USPTO.

There are three basic fees for utility patents:

  • The filing fee, which is non-refundable whether or not a patent is granted. (This is the cost to have your invention "examined" by the US Patent and Trademark Office - remember, you may or may not get a patent!)
  • The issue fee (you pay this only if your application is allowed)
  • Maintenance fees (paid at 3 1/2, 7 1/2, and 11 1/2 years after your patent is granted - these fees "maintain" your legal protection).
  • Additional fees may be required.
USPTO Fee Schedule

There are three kinds of patent filings commonly available through the U.S. Patent and Trademark Office (USPTO):

1. Utility patent: protects a new or useful invention

By law, inventors can only obtain utility patents on specific kinds of inventions. In general, inventors cannot patent unmodified natural products, abstract ideas or algorithms unconnected to real world applications.

2. Provisional application: secures a temporary, one-year patent-pending status

The inventor must file a utility patent application before the end of that year to maintain patent-pending status as of the provisional filing date.

3. Design patent: protects an ornamental design

Design patent applications are only for ornamental design. Design patents cannot protect any functional benefit that the design may confer.

The USPTO charges fees based on the size of the applicant. Large companies need to pay more.

Step 2. Document, Document, Document

Inventing happens in two steps: 1) conceiving the invention, and 2) reducing it to practice. Be sure to document both steps. For example, if your invention is a new machine made from combining two existing machines, then you must document when you had the idea to combine the machines to show conception.

Reduction to practice means taking that idea and making it work. For the combination of the two machines, you must document how to successfully combine them. Include proof the invention works and some alternative approaches. Include a schematic, drawing or photo of the combined machine and possible alternative ways of combination.

Step 3. Keep Your Idea Confidential

Patents require absolute novelty, meaning that any public disclosure can compromise a future patent filing. Your own disclosure of the invention is just as problematic as another inventor or scientist publishing similar results.

In the United States, an inventor has one year to file a patent application after making a public disclosure. But few other countries give a similar grace period; the minute you breach secrecy, most worldwide patent rights are gone. Once your patent application is filed, the patent is pending and you are safe to discuss your invention publicly.

Only you can decide whether you have enough at stake to hire a patent lawyer. Costs for patent lawyers typically run anywhere from $3,500 to $15,000 for startups.

If you draft the application yourself and skimp on the description, you may have a harder time enforcing your patent against competitors, because a thin description invites an easy work-around. Patents are filled with tiny details, and getting any one of them wrong may compromise your patent.

You can still do a lot yourself. Provisional applications, for example, lack many of the formalities of utility patent applications. You can draft and file a provisional application yourself using the USPTO's online web portal. If you do, ask a lawyer to review it before you file. This is less expensive than paying for hours of a lawyer's time to write the application, but it still gets you the benefit of the lawyer's experience.

Some choose to draft and file their own utility patent application. To do so, you could find a related patent and use it as a template for your application. Make your own draft drawings by tracing photographs of your prototype. Include all the relevant references you found in your prior art search. Write your own claims to differentiate your invention from the prior art you found. Even if a lawyer ends up filing your application for you, you will have gone a long way toward ensuring the application accurately reflects your invention.

Here is a basic timeframe chart:

View attachment 20807
 

Niptuck MD

Today's segment is all about generative design: what it is and why it is poised to be the future of manufacturing.

Using AI-based software and the computing power of the cloud, design engineers can create thousands of design options simply by defining their design problem: inputting basic parameters such as height, the weight the part must support, strength, and material options. This can lead to faster design cycles and a quicker path from design to manufacturing.

Several companies, including Airbus, Under Armour, and Stanley Black & Decker, are using generative design to solve engineering challenges and arrive at design solutions the human mind might never conceive on its own. With generative design, engineers are no longer limited by their own imagination or past experience. Instead, they collaborate with the technology to co-create more, better, with less: more new ideas and products that better meet users' needs, in less time and with less environmental impact.


Generative design leverages machine learning to mimic nature’s evolutionary approach to design. Designers or engineers input design parameters (such as materials, size, weight, strength, manufacturing methods, and cost constraints) into generative design software and the software explores all the possible combinations of a solution, quickly generating hundreds or even thousands of design options. From there, the designers or engineers can filter and select the outcomes to best meet their needs.

Imagine if, instead of starting a drawing or CAD model based on what you already know, you could tell a computer what you want to accomplish or what problem you are trying to solve. For example, say you want to design a chair. Instead of drawing two or three options (maybe ten if you're really creative), you tell the computer you want a chair that supports X amount of weight, costs X, and uses X material. The computer can then deliver hundreds, if not thousands, of readily manufacturable design options that all meet those criteria, many of which you could not have conceived on your own. In effect, the software enumerates the permutations of a design.
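The following toy Python sketch illustrates the "enumerate the design space, then filter by constraints" idea at the heart of generative design. Real generative design tools couple this with topology optimization and physics simulation in the cloud; the chair parameters, mass model, and strength score below are invented purely for illustration.

```python
# Toy sketch of generative-design-style exploration: enumerate parameter
# combinations for a hypothetical chair, then keep only those that meet the
# cost and strength constraints. All numbers are illustrative.
from itertools import product

MATERIALS = {  # illustrative material properties: (cost $/kg, strength score)
    "plywood": (2.0, 3),
    "aluminum": (6.0, 7),
    "nylon_pa12": (9.0, 5),
}
LEG_COUNTS = [3, 4, 5]
SEAT_THICKNESS_MM = [8, 12, 16, 20]

def candidate_designs(max_cost=20.0, min_strength=20):
    for material, legs, thickness in product(MATERIALS, LEG_COUNTS, SEAT_THICKNESS_MM):
        cost_per_kg, strength_score = MATERIALS[material]
        mass_kg = 0.2 * legs + 0.15 * thickness        # crude mass model (illustrative)
        cost = mass_kg * cost_per_kg
        strength = strength_score * legs + 0.5 * thickness
        if cost <= max_cost and strength >= min_strength:
            yield {"material": material, "legs": legs,
                   "seat_thickness_mm": thickness,
                   "est_cost": round(cost, 2), "est_strength": round(strength, 1)}

designs = sorted(candidate_designs(), key=lambda d: d["est_cost"])
print(f"{len(designs)} candidate designs; cheapest:", designs[0])
```

Even this crude version evaluates three dozen parameter combinations and ranks the survivors by cost; real tools do the same across continuous parameters and free-form geometry.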



With generative design, the simulation is built into the design process. You can specify manufacturing methods like additive, CNC, casting, etc. at the outset and the software only produces designs that can be fabricated with your specified manufacturing method. Or you can explore designs for multiple manufacturing methods.
Another often overlooked benefit of generative design is part consolidation. Because generative design can handle a level of complexity that is impractical for human engineers to conceive, and because additive manufacturing can fabricate the complex geometries that generative algorithms often produce, single parts can be created that replace assemblies of 2, 3, 5, 10, 20 or even more separate parts. Consolidating parts simplifies supply chains and maintenance and can reduce overall manufacturing costs.

With its ability to explore thousands of valid design solutions, its built-in simulation, and its awareness of manufacturability and part consolidation, generative design impacts far more than the traditional notion of design; it reaches across the entire manufacturing process.





 

Niptuck MD


Attachments

  • NAMBeltonRegulatoryStudyPolicy.pdf
  • RegReformStudyOnePager.pdf

Niptuck MD

Hello all,
for anyone sourcing within the automotive sector, the attached comparison covers the differences to look for when scoping your suppliers for ISO certifications. Those who know this are doing their due diligence and will find it much easier to streamline supplier quality over the long run.
 

Attachments

  • Comparison-of-Requirements.pdf

Niptuck MD

After talking with several of you on the forum, I will cover more about DFM and DFA, as well as other sourcing and manufacturing needs and issues that arise throughout the process.
Here is a good PDF explaining DFM protocols and principles that will give you a feel for what to expect when manufacturing on a small or large scale.
 

Attachments

  • dfm.pdf

