
2712 IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, VOL. 12, NO. 11, NOVEMBER 2017

Blind 3D Mesh Watermarking for 3D Printed Model by Analyzing Layering Artifact

Jong-Uk Hou, Student Member, IEEE, Do-Gon Kim, and Heung-Kyu Lee

Abstract— Because they will impact so many areas, copyright issues will inevitably arise as 3D printing expands into the content industry. The problem is that protections based on conventional methods are not effective, because the 3D printing process disables those protections. In this paper, we propose a robust and blind watermarking scheme that is able to protect content not only when the 3D model is shared in the digital world, but also when the 3D digital content is converted to analog content by 3D printing. First, we base our proposed watermark on a component that is invariant to the printing direction, for robustness against the printing process. The printing artifacts, instead of being regarded as severe distortion, are treated as a template that provides orientation information to the watermark detector. To achieve this, we also propose a blind estimation algorithm for the printing direction that starts from an analysis of the layering artifact. Using the results from the proposed estimator, the watermark from the printed-and-scanned model is synchronized with the original orientation. With the results of our tests with various 3D mesh models and attacks, we experimentally verified that the proposed method does not lose embedded patterns during the 3D print-scan process, especially with low-cost printers. Moreover, our method provides a new solution for estimating the printing direction that will be useful in a variety of fields.

Index Terms— 3D model, 3D printing, robust watermarking.

I. INTRODUCTION

LAYERED manufacturing, also known as 3D printing, is a technology by which 3D physical models can be efficiently fabricated without the restrictions that are usually imposed by geometric complexity. It is already clear that 3D printing is going to impact many industries, including the business and industrial equipment, automotive, medical, architecture, food, and consumer-product industries. On the other hand, copyright issues will inevitably occur with the expansion of 3D printing into the content industry, as happened earlier in the music and video markets [1]. Even though many issues remain to be resolved (e.g., the strength of printed materials, printing accuracy), research on copyright protection is one of

Manuscript received December 27, 2016; revised April 3, 2017 and June 5, 2017; accepted June 5, 2017. Date of publication June 21, 2017; date of current version August 1, 2017. This work was supported by the Samsung Research Funding Center of Samsung Electronics under Project SRFCIT1402-05. The work of J.-U. Hou was supported by a Global Ph.D. Fellowship Program through the National Research Foundation of Korea, Ministry of Education under Grant 2015H1A2A1030715. The associate editor coordinating the review of this manuscript and approving it for publication was Gwenaël Doërr. (Corresponding author: Heung-Kyu Lee.)

J.-U. Hou and H.-K. Lee are with the School of Computing, KAIST, Daejeon 34141, South Korea (e-mail: [email protected]; [email protected]).

D.-G. Kim is with Samsung Electronics, Seoul 16677, South Korea (e-mail: [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TIFS.2017.2718482

the important issues because 3D object data is printed at the 3D printer, handled, and distributed in real life.

Among the technical difficulties we now have to handle is the copyright protection of 3D model content. We need a strong, effective technology to prevent unauthorized copying, tampering, and illegal distribution, and so to protect intellectual property. Content providers have attempted to protect their 3D models by fingerprinting, encryption- and access-control-based digital rights management (DRM), and digital watermarking technologies. The problem is that 3D content protection based on encryption and DRM is not effective because the 3D printing process disables those protections. The 3D object data obtained from 3D scanners also lacks security. As a result, 3D model content may not only be illegally copied over the Internet but also illegally copied and re-distributed in offline markets. This weak point of content protection is called the analog hole [2], and the main technology available to cover it is digital watermarking. For this reason, research to develop a digital watermark technology to protect 3D prints is needed to ensure the prosperity of the 3D content industry.

Digital watermarking is the process of hiding digital information in a noise-tolerant signal such as multimedia data. The watermark can be used to determine authorship when a copyright dispute occurs, and can be used as a fingerprint to track a distribution path when a prototype in the hands of only a few people is leaked. Furthermore, digital watermarking could be utilized as an active component of an automatic system to regulate unauthorized users in a content-sharing environment. For this purpose, the 3D model watermark should be covertly embedded into the 3D model content before distribution. In addition, the embedded watermark has to resist possible attempts to infringe the copyright.

Another important aspect of this research issue is that we have to achieve a blind watermarking scheme in the true sense of the word. It is important that no prior information about the original content be required at the watermark detection stage, when we consider the technological practicality of the process. A blind watermarking scheme shows great practical advantages over semi-blind or non-blind schemes, because we do not need to know every corresponding key of the 3D printed object.

The trade-off between payload and robustness is a well-known fact in digital watermarking [3]. In this research, we designed zero-bit watermarking to obtain the maximum robustness that a watermarking primitive can inherently offer. Zero-bit watermarking is used in some copyright protection schemes. For instance, some copyright protection platforms [4] use watermarks to offer additional protection against the analog hole [2]. In addition, zero-bit watermarking can provide

1556-6013 © 2017 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.


HOU et al.: BLIND 3D MESH WATERMARKING FOR 3D PRINTED MODEL BY ANALYZING LAYERING ARTIFACT 2713

legal proof that a suspicious image is indeed a version of a work registered in an author society's database [3], a dispute that can be settled by the author society detecting the presence of the watermark.

To date, various non-blind and blind watermarking methods have been proposed for protecting 3D mesh models [5]. However, current 3D model watermarking technologies have been applied mainly to medical images, geographic information, and computer-aided design (CAD) data. Most of such work has concentrated on ordinary attacks, which happen during normal operation in the digital domain. As far as robustness is concerned, 3D printed objects bring new requirements to 3D model watermarking because they are distributed and handled both online and offline. Copyright attacks in those research fields are relatively limited, and the offline distribution environments of 3D content have rarely been considered.

A small number of 3D mesh watermarking techniques have been proposed to cope with the 3D printing scenario. Yamazaki et al. [6] presented a framework for watermark detection from 3D printed content. Instead of designing a robust watermarking domain, they used a mesh reconstruction process based on the original mesh. In our previous work [7], we proposed robust watermarking resilient to the 3D print-scan process. To accomplish this, we analyzed the 3D print-scan process, and then constructed a domain that was minimally affected by these processes. However, both schemes are non-blind schemes that require prior information at the watermark detection stage, such as a reference model. In addition, Macq et al. [8] presented an overview of the state of the art of 3D digital watermarking for 3D printing scenarios.

This paper reports 3D model watermarks robust against challenges that might occur during the 3D print-scan process. In our scheme, the printing artifacts, instead of being regarded as severe distortion, are treated as a template that provides the orientation information to the watermark detector. To achieve this, we also propose a blind estimation algorithm for the printing direction that starts from an analysis of the layering artifact. Using the result from the proposed estimator, the watermark from the printed-and-scanned model is synchronized to the original orientation. As a result of our tests with various 3D mesh models and attacks, we verified experimentally that the proposed method does not lose the embedded pattern during the 3D print-scan process.

In the rest of this paper, Section II presents an analysis of the 3D print-scan process and classifies possible distortions. Section III briefly presents the main idea of the proposed watermarking scheme. In Section IV, we present the proposed printing-axis estimator, and in Section V the watermark algorithm is presented. Section VI shows the simulation results of the proposed watermarking method against various attacks. Sections VII and VIII offer discussion and conclusions, respectively.

II. DISTORTIONS DURING THE 3D PRINT-SCAN PROCESS

A. Signal Processing Attack

Various distortions can disturb watermark detection during the 3D print and scan process. To fabricate a 3D object using

Fig. 1. (a) Original Bunny model (34,835 vertices), and (b) 3D printed and scanned Bunny model (401,858 vertices). The distortions that appeared on (b) were as follows: 1) data loss of the obscured part, 2) local deformation due to movement of the printer nozzle, 3) layering artifact, and 4) small distortion by the support structure.

a 3D printer, a 3D model designed by software, such as computer-aided design (CAD), is converted to a stereolithography (STL) file, which has become a de facto standard. After that, the cross-sections are combined and added together in a layer-by-layer sequence to form the physical part. In this step, the building process introduces random noise into the building materials. The amplitude of the noise is influenced by the quality of the printer and the type of building material. The initial output may require an additional amount of cleaning up before the final model is ready for use. Therefore, coating and polishing can be considered a signal processing attack, such as smoothing.

A 3D scanner is a device that measures a physical object to collect digital data, such as an STL file, on its shape. Multi-stripe laser triangulation (MLT) is a widely used method for this purpose. It obtains 3D data by sensing laser light reflected off an object at several angles. During the 3D surface scanning, both sampling errors and environmental errors (e.g., due to dust) may occur. After the scanning, the scanned data are merged into a single 3D model. During the merging of multiple scan data into a single dataset, geometric errors such as isolated and redundant vertices can appear. Due to the duplicated and isolated vertices, reconstruction and simplification are necessary as post-processing.

Fig. 1 shows the original Bunny model and the printed-and-scanned Bunny model. We used a Makerbot Replicator (layer height δ = 0.3 mm) for 3D printing and a Maestro3D MDS400 (resolution = 0.07 mm) for 3D scanning. Because these attacks are complex, providing robustness against them is the minimum requirement for protecting 3D print content.

B. DA-AD Conversion

During the building stage in 3D printing, the digital 3D data M = {V, E} is converted into an analog form. The 3D printed model in analog form is modeled by the continuous function f(x, y, z). Therefore, the cover model completely loses its vertices and connectivity information after the 3D printing process. Next, the surface information of the printed model is digitized by a 3D scanner. Before digitizing f(x, y, z), the printed model undergoes a transform that



Fig. 2. (a) General parameters of 3D printing, (b) printed model and its support structure, and (c) bottom of the model after removal from the support. When 3D-printed objects are removed from their supports, traces of local deformation remain on their bottoms.

includes rotation, scaling, and translation. The 3D scanning process includes various operations such as sampling, triangulation, and simplification. As we can see in Fig. 1, the scanned model M′ = {V′, E′} has a geometric structure similar to that of M, but has completely different vertices and connectivity information.

C. Layering Artifact

The layer thickness and printing direction are major attributes that affect the accuracy of the printed object [9], [10]. The addition of material in distinct layers inherently produces a layering artifact that is typical of layered manufacturing. As we can see in Fig. 2(a), the surface retains the stair-step shape as the process of stacking single layers proceeds. The accuracy of the printed model is highly dependent on the printing direction and layer thickness [10], and the layer thickness can be thought of as the resolution of the layered manufacturing process. The attack during slicing can be regarded as z-axis resampling, which is a type of signal processing attack. As a result, the surface is built in a stair shape (see Fig. 1 and Fig. 2(a)).

D. Cropping and Local Deformation

As shown in Fig. 2(c), traces caused by the physical supports of a printed model cause local deformation on the bottom. Moreover, any obscured part of the mesh may be lost during the 3D scanning process because the laser or white light cannot reach it; see Fig. 1(b)-1). Providing robustness against both cropping and remeshing attacks is one of the most challenging issues for blind watermark detection.

III. MAIN IDEA OF THE PROPOSED SCHEME

Before designing a watermark primitive resilient to 3D printing attacks, we focused on layer slicing and the additive manufacturing process as a watermark removal attack. These attacks commonly produce strong distortions in the z-axis direction of the target model, called a layering artifact, which includes z-axis quantization and resampling (see Section II). Therefore, adopting a watermark primitive with a component that is invariant to z-axis changes is the natural choice for 3D-printing-resilient watermarking. For this reason, we based our proposed watermark on the radius ρ and azimuth angle ϕ components of a cylindrical coordinate system (ρ, ϕ, z), which are invariant to the z-axis.
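The invariance claimed above is easy to check numerically. The following sketch is our own illustration, not the authors' code; `to_cylindrical` is a name introduced here for exposition:

```python
import numpy as np

def to_cylindrical(vertices):
    """Convert an Nx3 array of Cartesian vertices to (rho, phi, z)."""
    x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    rho = np.hypot(x, y)        # radius in the xy-plane
    phi = np.arctan2(y, x)      # azimuth angle, quadrant-aware
    return rho, phi, z

# A layering-like distortion: quantize z to 0.3 mm layers.
v = np.array([[1.0, 1.0, 0.37], [0.0, 2.0, 1.94], [3.0, 4.0, 0.52]])
rho, phi, _ = to_cylindrical(v)
v_layered = v.copy()
v_layered[:, 2] = np.round(v_layered[:, 2] / 0.3) * 0.3
rho_q, phi_q, _ = to_cylindrical(v_layered)
# rho and phi are untouched by the z-axis change
assert np.allclose(rho, rho_q) and np.allclose(phi, phi_q)
```

Any distortion acting purely along z, such as layer quantization, leaves ρ and ϕ unchanged, which is exactly why these two components carry the watermark.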

In this watermark primitive, if only the axis information of the cylindrical coordinate system is secured, it is possible to ensure robustness against attacks related to the z-axis component. However, the cover model completely loses its coordinate and mesh information after the 3D printing process, so that the embedded watermark cannot be detected due to the dislocation of the initial coordinate system. There is a way to restore the coordinate system by referring to the original model [6], but we cannot use this method because our research aims for blind watermark detection.

In order to achieve both robustness and blindness of the watermark against such unfavorable conditions, we propose a special form of watermarking system, described in Fig. 3. In our scheme, the layering artifacts, instead of being regarded as severe distortion, are treated as a template that provides the orientation information to the watermark detector. To achieve this, we also propose a blind estimation algorithm for the printing direction that starts from an analysis of the layering artifact.

We designed a spread spectrum watermarking based on the location information of the scattered vertices in a cylindrical coordinate system. Instead of changing the spectral coefficients, a signal Ω is synthesized as a spread spectrum signal, and Ω is directly embedded into the radius ρ of each vertex as follows:

ρ′ = ρ + Ω(ϕ). (1)

The signal Ω is generated by a function of the azimuth angle ϕ of the input vertices. Details of the embedding process can be found in Section V-A2. The proposed algorithm provides several advantages for solving the following challenges.
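The additive embedding of Eq. (1) can be sketched as follows. This is a minimal illustration only: the signal here is modeled as a keyed sinusoid of the azimuth angle, a stand-in for the actual spread-spectrum signal of Section V-A2, and the function name and parameters (`alpha`, `freq`) are our own assumptions:

```python
import numpy as np

def embed_eq1(vertices, key=7, alpha=0.01, freq=8):
    """Sketch of Eq. (1), rho' = rho + Omega(phi).

    Omega is modeled here as a keyed sinusoid of the azimuth angle,
    standing in for the paper's spread-spectrum signal. alpha bounds
    the embedding strength; freq and the keyed phase are illustrative.
    """
    x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    rho = np.hypot(x, y)
    phi = np.arctan2(y, x)
    phase = np.random.default_rng(key).uniform(0.0, 2.0 * np.pi)
    omega = alpha * np.cos(freq * phi + phase)   # placeholder Omega(phi)
    rho_w = rho + omega                          # Eq. (1)
    # back to Cartesian; z is left untouched, as in the scheme
    return np.column_stack([rho_w * np.cos(phi), rho_w * np.sin(phi), z])
```

Because only ρ is perturbed, the z-coordinates of all vertices are preserved exactly, and the perturbation magnitude is bounded by the embedding strength.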

A. Robust Against Layer Slicing

The layer slicing artifact, called the layering artifact, is a new type of distortion that emerged with 3D printing and has never been considered as an attack on 3D mesh watermarking. As introduced in Section II, the layer slicing process involves a sampling and resampling process along the z-axis, which is regarded as a distortion of the z-axis-related components of the cover model. In our proposed scheme, the watermark signal Ω is defined by a function of the azimuth angle ϕ = arctan(y/x) and is added to the radial distance ρ = √(x² + y²) of the cylindrical coordinate. In other words, the watermark signal Ω is not affected by changes in the z-axis direction. Therefore, the signal Ω is invariant to changes along the z-axis, so that the embedded signal Ω is robust against the layer slicing process.

B. Blind Watermark Synchronization

Recovering the orientation after cropping and remeshing attacks is one of the open problems in 3D mesh watermarking [5]. We recover the original Euler angles (θ, ψ, ϕ) of the printed-and-scanned model by analyzing the layering artifact. As explained in Section II, the surface of the printed model retains a stair-step shape along the printing axis as the process of stacking the single layers proceeds. By analyzing the layering effect using the proposed estimator, the printing axis is estimated in order to recover and synchronize the Euler angles θ and ψ. In the detection step, the embedded watermark is invariant to the Euler angle ϕ because the embedded signal is interpreted through the magnitude of the discrete Fourier transform.

Fig. 3. Main idea of the proposed watermarking scheme: For robustness to the layering process, we base our proposed watermark on a component that is invariant to the printing direction. To achieve this, a watermark pattern is embedded along a pre-defined axis, and then the model is printed in the same direction. To detect the embedded pattern in a blind and robust way, the orientation of the 3D scanned model is recovered using the proposed printing-axis estimator, which starts from an analysis of the layering artifact.

Fig. 4. Description of the projection system for our scheme.

C. Robust Against Connectivity and Remeshing Attacks

The synthesized signal Ω has a sinusoidal structure on the vertex radius around a continuous coordinate ϕ. Thus, watermark embedding is not affected by the spatial regularity of the vertices. Moreover, with uniform resampling and interpolation processes in the detection step, Ω is almost unaffected by the uniformity of the vertex sampling. Connectivity information is not used for watermark embedding.

IV. PRINTING-AXIS ESTIMATOR

In this section, we focus on blind and effective parameter estimation of layer-manufactured models. Using surface projection, filtering, and frequency analysis, the proposed scheme estimates the printing direction of the printed model.

A. Basic Idea

To analyze the layering artifact, we define a projection system with a rotating space as shown in Fig. 4. The direction of reduced projection is y, and the space is rotated about the z-axis. In our projection system, any plane P ⊂ ℝ³ can be shown as a line after a specific degree of rotation. To use this property of our system, we prove this assertion as follows.

Proof: Let P ⊂ ℝ³ be a plane representing a layer and v ∈ ℝ³ be a normal vector of P representing the printing direction. Let T be a transformation from Cartesian coordinates to spherical coordinates for v = (x, y, z). Then T(v) = vs = (r, θ, ϕ), which satisfies

r = √(x² + y² + z²),
θ = arctan(y/x),
ϕ = arccos(z/r),        (2)

where 0 ≤ θ ≤ 2π and 0 ≤ ϕ ≤ π. The inverse tangent arctan must be suitably defined, taking into account the correct quadrant of (x, y). The inverse transform T⁻¹(vs) = v = (x, y, z) is defined as follows:

x = r cos θ sin ϕ,
y = r sin θ sin ϕ,
z = r cos ϕ.        (3)

Let v′s = (r, 0, ϕ) be the vector obtained by rotating vs by −θ about the z-axis, and let P′ be the plane obtained by rotating P by −θ about the z-axis. Then, using the inverse transform T⁻¹, v′ = (x′, 0, z) is calculated as the normal vector of plane P′. Because the normal vector v′ lies on the xz-plane, the projection of P′ onto the xz-plane is a line.

Fig. 5. Layering artifact with random rotation can be analyzed as a form of periodic lines using our projection system.

Therefore, our projection system makes it possible to analyze the layering artifact as a form of periodic lines around the direction of printing (see Fig. 5). Based on this analysis of the layering artifact, we can design an estimator for the layer thickness and printing direction. In addition, the dimensional reduction by the projection greatly reduces the computational time of the proposed estimator.
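The transform pair (2)-(3) and the rotate-to-line argument can be verified numerically. A minimal sketch (our own illustration; the function names are introduced here):

```python
import numpy as np

def cart_to_sph(v):
    """Eq. (2): Cartesian -> spherical (r, theta, phi)."""
    x, y, z = v
    r = np.sqrt(x * x + y * y + z * z)
    theta = np.arctan2(y, x)       # quadrant-aware inverse tangent
    phi = np.arccos(z / r)
    return r, theta, phi

def sph_to_cart(r, theta, phi):
    """Eq. (3): spherical -> Cartesian."""
    return np.array([r * np.cos(theta) * np.sin(phi),
                     r * np.sin(theta) * np.sin(phi),
                     r * np.cos(phi)])

# Take a layer-plane normal v; rotating it by -theta about the z-axis
# (i.e., setting theta to 0) lands it in the xz-plane, so the rotated
# plane projects onto the xz-plane as a line, as the proof concludes.
v = np.array([1.0, 2.0, 2.0])
r, theta, phi = cart_to_sph(v)
assert np.allclose(sph_to_cart(r, theta, phi), v)   # T^{-1} inverts T
v_rot = sph_to_cart(r, 0.0, phi)
assert abs(v_rot[1]) < 1e-12                        # y-component vanishes
```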

B. Layering Artifact Analyzer

Using the projection system described in Section IV-A, we designed an algorithm to estimate the layering artifact. First, the digitized 3D-surface data is obtained with a 3D scanner to analyze the surface of the layer-manufactured model. As a pre-processing step, the center of the scanned data M is placed at the center of mass. Rotation alignment is not needed because it is one of the estimation goals.

The process is divided into three steps as follows: 1) rotation and projection, 2) amplification of the artifact, and 3) frequency analysis. Below, each step is described in detail.



Fig. 6. Conceptual illustration of the thickness calculation.

1) Rotation and Projection: The scanned data M of the layer-manufactured model is rotated around the z-axis by θ° in the range [0, 180), and a rotated model Mθ is created as shown in Fig. 7(c). Then, Mθ is projected onto the xz-plane, and a thickness map Pθ is obtained. To measure the thickness of Mθ, the depth map of the back-faces is subtracted from the depth map of the front-faces. The depth map Pf of the front-faces is calculated with the z-buffer algorithm [11]. The z-axis of Mθ is negated, and the back-faces are rendered to obtain the depth map Pb. Then, Pb is subtracted from Pf to obtain the thickness map Pθ (see Fig. 6).
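The rotation-and-projection step can be sketched as below. This is a simplified stand-in, not the authors' implementation: proper z-buffer rendering of front- and back-faces is replaced by a per-pixel min/max of vertex depths, and `thickness_map` and its parameters are names we introduce:

```python
import numpy as np

def thickness_map(vertices, theta_deg, grid=64):
    """Rotate about z by theta, project onto the xz-plane, and take
    (far - near) y-depth per pixel as the thickness map P_theta.
    A per-pixel min/max over vertices stands in for z-buffer renders."""
    t = np.radians(theta_deg)
    rot = np.array([[np.cos(t), -np.sin(t), 0.0],
                    [np.sin(t),  np.cos(t), 0.0],
                    [0.0, 0.0, 1.0]])
    v = vertices @ rot.T
    # map x and z coordinates to grid indices on the xz-plane
    xi = np.clip(((v[:, 0] - v[:, 0].min()) / (np.ptp(v[:, 0]) + 1e-9)
                  * (grid - 1)).astype(int), 0, grid - 1)
    zi = np.clip(((v[:, 2] - v[:, 2].min()) / (np.ptp(v[:, 2]) + 1e-9)
                  * (grid - 1)).astype(int), 0, grid - 1)
    near = np.full((grid, grid), np.inf)    # front-face depth (min y)
    far = np.full((grid, grid), -np.inf)    # back-face depth (max y)
    for i, k, y in zip(xi, zi, v[:, 1]):
        near[k, i] = min(near[k, i], y)
        far[k, i] = max(far[k, i], y)
    return np.where(np.isfinite(near), far - near, 0.0)
```

For a watertight mesh, the real algorithm rasterizes triangles; the vertex-only version above only approximates the thickness but shows the front-minus-back structure of Fig. 6.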

2) Amplifying the Artifact: To emphasize the presence of the periodic properties in the layering artifacts, we applied several filtering techniques motivated by [12] and [13]. First, a high-pass filter is applied to Pθ to reveal the trace of the layering artifacts. We note that similar results can be achieved by second-derivative methods or a Laplace operator.

In the filtered Pθ, we found that histogram thresholding [14] can be effective for removing some meaningless components. In our implementation, two opposite operations were used: histogram stretching and histogram thresholding. The histogram thresholding is defined by the following equation:

P′θ(x, y) =
    0.5 − τ,   if Pθ(x, y) < 0.5 − τ,
    0.5 + τ,   if Pθ(x, y) > 0.5 + τ,
    Pθ(x, y),  otherwise,        (4)

where τ denotes a threshold,¹ and (x, y) denotes a value from the x-th row and y-th column. Histogram stretching and histogram thresholding are applied to Pθ sequentially, and an artifact-amplified P′θ is obtained.

Fig. 7 presents an example of the input model M and the intermediate results for each filtering step. It is hard to find the periodicity in Fig. 7(b) and (c). In contrast, a strong periodicity around the printing direction appears in Fig. 7(d).
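The amplification step can be sketched as follows; an illustration under stated assumptions (the paper's high-pass filter is unspecified here, so a simple local-mean subtraction is used, and `amplify_artifact` is our own name). The clamp at the end is exactly Eq. (4):

```python
import numpy as np

def amplify_artifact(P, tau=0.1):
    """High-pass filter, histogram stretching, then Eq. (4) thresholding.

    The high-pass stage subtracts a 3x3 local mean -- a stand-in; the
    paper notes second-derivative filters give similar results.
    """
    h, w = P.shape
    pad = np.pad(P, 1, mode='edge')
    local_mean = sum(pad[i:i + h, j:j + w]
                     for i in range(3) for j in range(3)) / 9.0
    hp = P - local_mean
    # histogram stretching to [0, 1]
    hp = (hp - hp.min()) / (np.ptp(hp) + 1e-12)
    # Eq. (4): clamp values far from 0.5 to suppress meaningless components
    return np.clip(hp, 0.5 - tau, 0.5 + tau)
```

With τ = 0.1 (the value used in the paper), the output range is confined to [0.4, 0.6], flattening extreme responses while keeping the periodic stair-step trace.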

3) Frequency Analysis: If the input scanned data are from a layer-manufactured model, the corresponding artifact contains a specific strong periodicity. As explained in Section IV-A, the layering artifact can be presented in the form of periodic lines, which can be analyzed with the 2D discrete Fourier transform.

First, the Hanning window is applied to P′θ to prevent spectral leakage [14]. Then, the 2D fast Fourier transform (FFT) is carried out, and the magnitudes are computed as follows:

Fθ(u, v) = H(|F(P′θ(x, y))|), (5)

¹A value of 0.1 was used for the threshold parameter τ.

Fig. 7. Sample 3D model and intermediate results for a fixed rotation value θ: (a) 3D-printed Venus model, (b) 3D-surface illustration of the scanned model Mθ, (c) thickness map Pθ, and (d) final result of the filtering process.

where u and v are the frequency variables for x and y, and F(·) denotes the 2D FFT. H(·) is the Butterworth high-pass filter, which removes significant low-frequency components. Then, Fθ(u, v) is transformed to the polar coordinates Fθ(r, ϕ), which satisfy the following:

r = √(u² + v²),  ϕ = arctan(v/u), (6)

where arctan denotes the inverse tangent taking into account the correct quadrant of (u, v), and r and ϕ are the radial and angular coordinates.

Then, we applied the threshold-based peak detector reported in [13] to search for candidate local maxima (peaks n times greater than a local average magnitude), and a set of locations Pθ0 of the candidate peaks is obtained. Then, the peak for θ with the largest magnitude was chosen, which satisfies:

pos(θ) = (r0, ϕ0) = argmax_{(r,ϕ) ∈ Pθ0} Fθ(r, ϕ),
mag(θ) = Fθ(r0, ϕ0),                               (7)

where θ is in the range [0, 180), and Pθ0 denotes the set of locations of the candidate peaks. Here, pos and mag represent the position and the magnitude value of the peak for θ. The θ represents the maximum peak location from the rotated model Mθ, and the magnitude for θ indicates the magnitude of the layering artifact.
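Steps (5)-(7) can be sketched as below. Two simplifications are assumed: a sharp radial cutoff stands in for the Butterworth high-pass filter, and the candidate threshold is taken against the global passband mean rather than a local average.

```python
import numpy as np

def strongest_peak(P_amp, n=5.0):
    """Windowed 2D FFT magnitudes of the amplified map, high-pass mask,
    polar coordinates, and a threshold-based peak search (candidates are
    n times the passband mean). Returns (r, phi_degrees, magnitude) of
    the strongest peak, or None if no candidate exists."""
    h, w = P_amp.shape
    win = np.outer(np.hanning(h), np.hanning(w))   # prevent spectral leakage
    F = np.abs(np.fft.fftshift(np.fft.fft2(P_amp * win)))
    v, u = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    r = np.hypot(u, v)
    F = np.where(r > min(h, w) * 0.05, F, 0.0)     # crude high-pass cutoff
    phi = np.degrees(np.arctan2(v, u))             # quadrant-aware arctan, Eq. (6)
    cand = F > n * F[F > 0].mean()                 # candidate peak set
    if not cand.any():
        return None
    idx = np.unravel_index(np.argmax(np.where(cand, F, 0.0)), F.shape)
    return r[idx], phi[idx], F[idx]                # pos(theta) and mag(theta), Eq. (7)
```

On a map containing periodic horizontal lines, the returned radius matches the line frequency and the angle points across the lines, which is what the estimator in Section IV-C exploits.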

C. Printing-Axis Estimator

Using the layering-artifact algorithm described in Section IV-B, we designed an algorithm to estimate the direction of the layering artifact. Fig. 8(a) shows a sample plot of


Fig. 8. (a) Plot of the values mag(θ) for the rotation value θ, and (b) plot of Fθ(r, ϕ) with the estimated parameter θ.

Fθ(r, ϕ) for the estimated parameter θ. Since the periodic energies converge to a local point, we can estimate the global maximum point of Fθ(r, ϕ) by solving the following optimization problem:

θ = argmax_θ (mag(θ)). (8)

The θ is calculated using the layering-artifact analyzer presented in Section IV-B. The θ indicates the rotation degree at which Pθ exhibits the most distinct periodicity. Moreover, ϕ from pos(θ) indicates the rotation degree of the periodic lines in Pθ. Therefore, the unit vector us is estimated as a relative printing direction in spherical coordinates using a simple assignment:

us = (1, θ , ϕ). (9)

However, there is one more thing to consider here. In the special case with us = (r, θ, 0), the proposed projection system would fail to find the printing direction, since the θ rotation does not give any variation of mag(θ). To eliminate this drawback, a simple two-pass algorithm is proposed as follows. First, the printing axis is estimated using the proposed scheme. Next, the input model is rotated by a small random degree ϕ to remove the case us = (r, θ, 0), and the estimator is performed one more time. If the estimation results differ, we choose the estimation result with the higher variance of mag(θ).

The estimated printing direction provides the relative degrees for the rotation alignment of the scanned data. Thus, the result is used for watermark synchronization in the proposed watermark detection.
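The sweep of Eq. (8) and the two-pass safeguard can be sketched as follows. Here `mag` is assumed to be a callable implementing mag(θ) from Section IV-B, and `perturbed_mag` the same analyzer applied after a small random pre-rotation; both are hypothetical hooks of this sketch.

```python
import numpy as np

def estimate_printing_axis(mag, step=1.0):
    """Eq. (8): sweep theta over [0, 180) degrees and keep the rotation
    that maximizes the layering-artifact peak magnitude mag(theta)."""
    thetas = np.arange(0.0, 180.0, step)
    scores = np.array([mag(t) for t in thetas])
    return thetas[np.argmax(scores)], scores

def two_pass_estimate(mag, perturbed_mag):
    """Two-pass safeguard for the degenerate case us = (r, theta, 0):
    estimate once on the original scan and once after a small random
    rotation, then keep the run whose mag(theta) curve varies more."""
    t1, s1 = estimate_printing_axis(mag)
    t2, s2 = estimate_printing_axis(perturbed_mag)
    return t1 if s1.var() >= s2.var() else t2
```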

V. PROPOSED ALGORITHM

A. Watermark Embedding Algorithm

Our scheme embeds watermark information into a 3D mesh model and detects the embedded information from 3D printed objects. During the printing process, the z-axis of the watermarked model should be aligned with the printing direction of the 3D printer to achieve blind detection. Fig. 9 illustrates the overall process of the proposed scheme. To achieve imperceptibility, a visual masking function was designed for our scheme.

1) Preprocessing and Axis Alignment: Let the 3D polygonal mesh M = {V, E} be the input cover model. First, we determine the z-axis for the watermark embedding, and we adjust the orientation of the mesh M as follows:

vi = R(θ, ϕ) · vi^T, (10)

Fig. 9. Overall process of the watermark embedding algorithm.

Fig. 10. Sample of (a) the vertices of the original Crank model and its surface, (b) the resampled Crank model, and (c) the center-aligned Crank model. The cover model was sliced into 30 layers, and the center position of each layer was aligned to the z-axis. The vertices of each model are presented as small dots.

where vi ∈ V, and R(θ, ϕ) is the rotation matrix defined as,

          |  cos θ cos ϕ   − sin θ cos ϕ   sin ϕ |
R(θ, ϕ) = |  sin θ           cos θ          0    | , (11)
          | − cos θ sin ϕ    sin θ sin ϕ    cos ϕ |

where θ and ϕ are the relative degrees of the pre-defined printing direction (see Section IV). The z-axis should be determined as the orthogonal direction of the 3D printing layer. In other words, a content provider who wants to distribute the 3D printed model should print the watermarked model along the z-axis. In addition, the vertices of some models have an extremely biased distribution, as shown in Fig. 10(b). In this case, we conducted uniform remeshing on the input model, not only to remove irregularities of the vertex density but also to provide robustness against remeshing attacks produced by the 3D scanning.

After the axis-alignment step, the vertices vi = (xi, yi, zi) are classified into L distinct bins with equal ranges of their z values. The maximum and minimum values of z are calculated in advance and denoted by zmax and zmin, respectively. The l-th layer Ll is calculated by:

Ll = {vi |zmin + δz · l < zi < zmin + δz · (l + 1)}, (12)

where 1 ≤ l ≤ L, 1 ≤ i ≤ V, 1 ≤ j ≤ Vl, and Vl denotes the number of vertices belonging to Ll. Instead of vi ∈ Ll, we use the notation v(l, j), which denotes the j-th vertex of the l-th bin. The thickness of the bin δz is defined by δz = (zmax − zmin)/L, where the number of bins L is a user-selected constant.

Next, the center position cl={1,2,...,L} = (xl, yl) for each of the L layers is calculated. We used the volume-moment-based center coordinate [15] because a simple center calculation is not robust


against connectivity attacks.

cl = (xl, yl) = ( Ml100 / Ml000 , Ml010 / Ml000 ), (13)

where Mlpqr denotes the (p, q, r)-th order volume moment of the l-th layer Ll. The calculation of the volume moment can be found in [16]. Using cl, the positions of the vertices v(l, j) ∈ Ll are normalized as follows:

x′(l, j) = x(l, j) − xl,
y′(l, j) = y(l, j) − yl. (14)

Fig. 10 shows a sample of the transformed model in the form of the center-aligned layers. Each point cloud v(l, j) ∈ Ll is regarded as a sliced layer of the cover model M. The cover model is sliced into L layers, and the center position of each layer is aligned with the z-axis; see Fig. 10(c).
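The center alignment of Eqs. (13)-(14) can be sketched as below, with one stated simplification: the per-layer center is taken as the plain vertex centroid rather than the volume-moment ratio (M100/M000, M010/M000) of [15], which is more robust to connectivity attacks but requires face information.

```python
import numpy as np

def center_align_layers(vertices, layer_idx, L):
    """Eq. (14): shift each layer's x and y so the layer center lies on
    the z-axis. `layer_idx` holds the 0-based bin index per vertex.
    Centroid stands in for the volume-moment center of Eq. (13)."""
    out = vertices.astype(float).copy()
    for l in range(L):
        m = layer_idx == l
        if m.any():
            out[m, 0] -= vertices[m, 0].mean()  # x' = x - x_l
            out[m, 1] -= vertices[m, 1].mean()  # y' = y - y_l
    return out
```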

2) Watermark Embedding: First, we transform the vertices vi = (xi, yi, zi) in Cartesian coordinates to cylindrical coordinates (ρi, ϕi, zi), which satisfy:

ρi = √(xi² + yi²),  ϕi = arctan(yi / xi), (15)

where 0 ≤ ϕ ≤ 2π. The inverse tangent arctan must be suitably defined, taking into account the correct quadrant of (x, y).

Next, a binary watermark pattern w of length lw is generated using a watermark key and a pseudo-random generator. The watermarking algorithm is presented with the specific version of Ω described in Section III. The watermark embedding is formulated as the following equation:

ρ′i = ρi + α(vi) · Ω(ϕi), (16)

where 1 ≤ i ≤ V, and α(·) is the function for visual masking described in Section V-A4. The spread-spectrum signal Ω is synthesized as a sum of sinusoidal signals in the frequency band [fs + 1, fs + lw] as in Eq. (17):

Ω(ϕi) = Σ_{m=1}^{lw} wm · sin(2πϕi(m + fs) + φi,l), (17)

where fs is the minimum frequency band, and φi,l is a phase parameter for imperceptibility, as described in Section V-A4. Then, we obtain V′ = {v′i}i=1,2,...,V as the set of watermarked vertices, where v′i = (ρ′i, ϕi, zi).

3) Model Reconstruction: The cylindrical coordinates of the watermarked vertices v′i ∈ V′ are transformed to Cartesian coordinates (x′i, y′i, z′i) that satisfy:

x′i = ρ′i cos ϕi,  y′i = ρ′i sin ϕi,  z′i = zi. (18)

Next, the v′i are classified into L distinct bins in the same way as in Eq. (12). The x and y positions of the vertices are recovered as follows:

x(l, j) = x′(l, j) + xl,
y(l, j) = y′(l, j) + yl, (19)

where xl and yl are the center positions calculated in Eq. (13). Through this process, we finally obtain the watermarked model M′ = {V′, E}, where E is the set of connectivity information of the original model.

4) Improving Imperceptibility: To improve the imperceptibility of the embedded watermark, we developed three approaches: 1) visual masking based on surface roughness, 2) randomizing the phase of Ω, and 3) smoothing along the z-axis. Using the cover model M and the vertices vi ∈ V, the watermark strength function α(vi) is calculated as follows:

α(vi ) = s · γi · βi (20)

where s, γi, and βi are the strength factor, roughness measure, and smoothing parameter, respectively. Detailed descriptions of each approach follow.

a) Visual masking based on surface roughness: The 'roughness' term in visual masking differs from that of the layering artifact. The concept of visual masking [17], [18] is that a rough region of a cover model can hide some geometric distortions provided that their frequencies are quite similar. To measure the roughness, denoted R(vi), we use the method proposed by Lavoué et al. [19], which is based on curvature analysis over local windows of the mesh. The strength factor γi is adapted by the mean roughness value for each local region i as γi = (Ri − Rmin)/(Rmean − Rmin), where Ri is the roughness of region i, and Rmean and Rmin denote the mean and minimum roughness over the whole mesh.

The visual masking process effectively hides the watermark artifacts in high-masking regions. Fig. 11(c) shows the distortions due to the watermark embedding. A smooth region is associated with weak watermark strength, while a rougher region is significantly distorted.

b) Randomized phase of Ω: The pattern embedding around the azimuth angle induces a vertical stripe pattern on the surface; see Fig. 11(a). We can remove the stripe pattern by randomly varying φi,l in Eq. (17) for each layer Ll. Note that the randomized phase component of Ω does not affect the detection performance, because the embedded signal is invariant to the angle ϕ.

c) Smoothing along the z-axis: Even after improving the watermark invisibility through the randomized phase (Section V-A4b), crooked lines are created at the boundaries of the divided layers, which leads to visual disturbance. To alleviate this artifact, the z value of Ω is smoothed by the parameter βi, a bell-shaped function (1/√(2π)) e^{−(hi/δz − 0.5)²}, where δz is the layer height defined in Section V-A1. Here, hi is the relative z-position of vi, calculated by mod(zi, δz), where mod is the modulo operation.

Fig. 11 shows the watermarked model without and with the visual masking process. Both models were watermarked with a high strength factor (s = 10e-4) to show the visual shape of the watermark. The geometric distortion was measured using the mesh structural distortion measure (MSDM2) [20] with the standard configuration (number of scales: 3, symmetric). The embedded watermark appears as a stripe pattern strongly represented on certain areas of the model's surface. Compared to Fig. 11(a), the embedded watermark in Fig. 11(b) achieved a significant level of imperceptibility.


Fig. 11. Sample of the watermarked models with a high strength factor (s = 10e-4): (a) without visual masking (MSDM2 = 0.522), (b) with visual masking (MSDM2 = 0.355), and (c) distortion map of (b).

Fig. 12. Overall process of the proposed watermark detection scheme.

B. Watermark Detection

To detect the embedded watermark, the surface information of the physical 3D content is first digitized using a 3D scanner. In order to synchronize the embedded watermark, the orientation of the scanned data is determined by the printing-axis estimator. Next, we resample the scanned data onto a regular grid to capture the geometric information. Then, the watermark information is detected from the resampled data using the discrete Fourier transform (DFT). Fig. 12 illustrates the overall process of the proposed scheme. We now describe each step in detail.

1) Watermark Synchronization With z-Axis Recovery: Let the 3D polygonal mesh M = {V, E} be the 3D scanned data of the watermarked model. First, we determine the z-axis of M for watermark synchronization using the printing-axis estimator. The unit vector us (as a relative printing direction) is estimated in spherical coordinates using a simple assignment:

us = (1, θ , ϕ). (21)

where θ and ϕ are the estimation results described in Section IV. To synchronize the watermark pattern embedded in M with the detector, we adjust the orientation of the mesh M. The input vertices vi = (xi, yi, zi) from V are rotated using matrix multiplication:

vi = R(−θ, −ϕ) · vi^T, (22)

where R(θ, ϕ) is the rotation matrix defined in Eq. (11). Through this process, the z-axis of M is aligned to the orthogonal direction of the 3D printing layer. Because the watermark pattern was embedded along the z-axis of M, the alignment process can be regarded as watermark synchronization.

We should note that our research focuses on the protection of the watermarked-and-printed model. In traditional digital watermarking without a 3D print-scan process, the printing-axis estimator would fail to find the mesh orientation initially

used in watermark embedding. To solve this problem, we have two options. A simple solution is to extract the watermark from both the synced and non-synced models and use the larger correlation for the detection decision. Another option is to use the magnitude value from the printing-axis estimator. When the printing axis is detected, a peak value mag(θ), which is generally more than 10² times larger than the average value of the same frequency band, indicates whether or not the model was printed. When the model is not 3D printed, the synchronization step is skipped.

2) Coordinate Transform and Interpolation: First, M is transformed into the form of the center-aligned layers using the method introduced in Section V-A1. Then, the vi in Cartesian coordinates are transformed to cylindrical coordinates (ρi, ϕi, zi) using Eq. (15), where 0 ≤ ϕi ≤ 2π.

Next, we conduct uniform resampling on the input model, not only to provide robustness against various attacks produced by the 3D scanning but also to remove irregularities of the vertex density. Let G(z, ϕ) be a regular grid of size Gz × Gϕ. Then, we define scattered points G(zi, ϕi) = ρi, i = 1, ..., V, using the values from vi = (ρi, ϕi, zi) ∈ V. Using the scattered-data interpolation algorithm [21],² G(z, ϕ) is fitted from the scattered data G(zi, ϕi).
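The grid fitting can be sketched as below. The paper uses Delaunay-based cubic scattered-data interpolation (MATLAB); per-cell averaging with a crude fill for empty cells is a simpler numpy stand-in, adequate only to illustrate the (z, ϕ) → ρ resampling.

```python
import numpy as np

def resample_to_grid(vertices, Gz=32, Gphi=128):
    """Map scanned vertices to cylindrical coordinates and build a
    regular (z, phi) grid G by averaging the radial values rho that
    fall into each cell; empty cells get the global mean."""
    x, y, z = vertices.T
    rho = np.hypot(x, y)
    phi = np.mod(np.arctan2(y, x), 2 * np.pi)
    zi = np.clip(((z - z.min()) / (np.ptp(z) + 1e-12) * Gz).astype(int), 0, Gz - 1)
    pj = np.clip((phi / (2 * np.pi) * Gphi).astype(int), 0, Gphi - 1)
    acc = np.zeros((Gz, Gphi))
    cnt = np.zeros((Gz, Gphi))
    np.add.at(acc, (zi, pj), rho)                  # sum rho per cell
    np.add.at(cnt, (zi, pj), 1.0)                  # count samples per cell
    filled = cnt > 0
    G = np.zeros((Gz, Gphi))
    G[filled] = acc[filled] / cnt[filled]
    G[~filled] = G[filled].mean()                  # crude fill for empty cells
    return G
```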

3) Watermark Detection: We detect the embedded pattern based on the correlation between the spectral domain of G(z, ϕ) and the reference pattern generated by the watermark key. Then, the embedded data is decoded based on the locations of the correlation peaks. A detailed description of each process follows.

First, the Hanning window is applied to the interpolated grid G(z, ϕ) to prevent spectral leakage [14]. Then, a one-dimensional FFT is applied along the ϕ-axis, and the magnitudes are computed as follows:

M(z, ξϕ) = H (|F(G(z, ϕ))|), (23)

where 1 ≤ ξϕ ≤ Gϕ, 1 ≤ z ≤ Gz, and ξϕ is the frequency variable. F(·) denotes the FFT function, and H(·) is the Gaussian high-pass filter with σ = fs/2, which removes significant low-frequency components.

To remove the random noise signal, the magnitudes along the z-axis are averaged as follows:

M′(ξϕ) = (1/Gz) Σ_{z=1}^{Gz} M(z, ξϕ). (24)

Next, sub-vectors M∗ were selected from M′ as follows:

M∗(m) = M′(m + fs), (25)

where 1 ≤ m ≤ lw, and fs and lw are the start frequency and the length of the embedded watermark, respectively. Then, we calculate a detector response using normalized correlation as follows:

corr(w, M∗) = ((w − w̄) · (M∗ − M̄∗)) / (‖w − w̄‖ · ‖M∗ − M̄∗‖), (26)

²We used Delaunay-triangulation-based cubic interpolation implemented in MATLAB 2012b.


Fig. 13. The detector responses with 1000 reference watermarks. The model was watermarked with the 500-th mark on the x-axis, where the response is very high.

where the bar above a symbol denotes the mean value, and w is the reference pattern of length lw generated using the watermark key and a pseudo-random generator (see Section V-A2).
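The detection pipeline of Eqs. (23)-(26) can be sketched as below. One simplification is assumed: the Gaussian high-pass filter H(·) is omitted, since the sub-vector selection of Eq. (25) already discards the low-frequency bins.

```python
import numpy as np

def detector_response(G, w, fs=50):
    """Per-row FFT magnitudes along phi (Eq. 23), averaging over z
    (Eq. 24), sub-vector selection in the watermark band (Eq. 25),
    and normalized correlation against the reference pattern w (Eq. 26)."""
    Gz, Gphi = G.shape
    win = np.hanning(Gphi)                         # prevent spectral leakage
    M = np.abs(np.fft.fft(G * win, axis=1))        # Eq. (23) magnitudes
    Mavg = M.mean(axis=0)                          # Eq. (24): average over z
    lw = len(w)
    Ms = Mavg[fs + 1: fs + lw + 1]                 # Eq. (25): sub-vector
    a = w - w.mean()
    b = Ms - Ms.mean()
    # Eq. (26): normalized correlation
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
```

On a grid carrying sinusoids at the marked bins, the response for the true pattern is high while an unrelated pattern scores low, which is the basis of the statistical test below.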

To analyze the detected watermarks, detector responses from the watermarked model and the original model were compared by the statistical analysis used in [6] and [22]. The analysis is based on the random-watermark false-positive model introduced in [23]. First, the watermark detector response r is defined as the correlation value calculated in Eq. (26). The responses rw and r0 were computed from the watermarked model using the inserted pattern w and a random pattern w0 (a false watermark orthogonal to w), respectively. Then, the detector responses are turned into a probabilistic answer using a standard statistical analysis. The false-positive probability Pfp that r0 would be as high as rw was computed using Student's t-test [24].

To determine whether or not a suspicious model is watermarked, we use τw as the threshold to minimize the false-positive rate (e.g., Pfp < 10⁻⁵); it was determined by experiment without any attacks. Sample results of a random-watermark false-positive test are displayed in Fig. 13. On the other hand, the probability of false negatives Pfn, where the watermark is embedded but the detector fails to identify it, cannot in general be computed analytically. Unlike the false-positive case, there are many more variables to consider before analyzing Pfn, because it depends on the watermark detector, the embedder, and the specific attack that was applied [23].
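The false-positive estimate can be sketched as follows, with one stated substitution: a normal-tail approximation of the r0 distribution is used here in place of the Student's t-test of [24].

```python
import numpy as np
from math import erfc, sqrt

def false_positive_probability(r_w, r0_samples):
    """Random-watermark false-positive model [23], simplified: fit a
    Gaussian to the responses r0 of random (false) watermarks and
    estimate P(r0 >= r_w), the chance that a random pattern would
    score as high as the observed response r_w."""
    mu = r0_samples.mean()
    sd = r0_samples.std(ddof=1)
    zscore = (r_w - mu) / sd
    return 0.5 * erfc(zscore / sqrt(2.0))          # upper-tail probability
```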

VI. EXPERIMENTAL RESULTS

In this section, we demonstrate experimental results to show that the proposed watermarking scheme is robust against the conventional attacks suggested by Wang et al. [25]. The printing-axis estimation and watermark detection tests for 3D printed models were conducted as well. We used the three printing machines shown in Table I. The Makerbot Replicator and OTS Deltabot, which utilize fused deposition modeling (FDM), have 0.1-0.3 mm resolution in the height direction. The Objet Eden250 creates 3D objects based on the Polyjet technique with 16-30 μm layer thickness, on average. Each printed model was scanned using a Maestro3D MDS400, a multi-stripe laser triangulation (MLT)-based 3D scanner with 0.07 mm spatial resolution. Fig. 14 shows physical samples of the printed models. For measuring the watermarking

TABLE I

DESCRIPTION OF 3D PRINTERS IN OUR EXPERIMENT

TABLE II

DETAILED DESCRIPTION OF PRINTED MODELS IN OUR EXPERIMENT, AND EXPERIMENTAL RESULTS OF THE PRINTING-AXIS ESTIMATOR

Fig. 14. Samples of the printed models using the printers in Table I: (a) MR - Dragon, (b) OD - Venus, (c) OE - Bunny.

performance, we used a desktop computer with an Intel(R) i7-3770 CPU and 16 GB main memory. The algorithm was implemented in OpenGL and MATLAB 2012b using the mesh toolbox [26].

A. Printing-Axis Estimation Results

For the experiments, the ten models described in Table II were printed using the three machines shown in Table I. In order to test the estimation scheme only, each model was printed and scanned without the watermarking process.

Table II shows the estimation results and their accuracy. The estimation error rate between the true and estimated printing direction vs is calculated as ||vs − v̂s||/180, where ||·|| is the L2 norm. Examining the results, the proposed scheme successfully estimated vs from all models printed using the FDM-based machines. In contrast, the estimation results from the Eden250 were inaccurate, because the Objet Eden250 has such high resolution (16-30 μm) that it was hard to capture the layering artifact using our 3D scanner. By the sampling theorem [27], it was determined that a high-resolution (8-15 μm) 3D scanner is required to capture the higher-frequency signal needed to estimate the parameters of


Fig. 15. The watermarked models and the distortion map of the watermarked Venus models: (a) watermarked model (MSDM2 = 0.168), (b) strongly watermarked model (MSDM2 = 0.486), (c) distortion map of (b).

the Eden250. Moreover, the layering artifact could be modified by the shape of the model and the printing environment. For example, it might be crumpled on vertical edge structures, which are hard to fabricate. Certain types of surface distortion could be caused by the support structure [28], which might distort the estimation results. In addition, there are several processes that could contaminate the frequency component, such as interpolation [13] in the thickness calculation and the sampling process during 3D scanning.

B. Watermark Robustness Against Online Attacks

In this section, we demonstrate that the proposed watermarking scheme is robust against the conventional attacks suggested by Wang et al. [25]. The start frequency fs = 50 was employed for the frequency band of the spread-spectrum signal. In our implementation, we used a binary pattern w satisfying lw = 128 and Σ_{i=1}^{lw} wi = 10 as the watermark pattern. The number of bins L was set to 25. The geometric distortion was measured using the mesh structural distortion measure (MSDM2) [20] with the standard configuration (number of scales: 3, symmetric). Fig. 15 illustrates the watermarked Venus models and their distortion map. The distortion map represents the HSV maps of the objective geometric distortions between the original and watermarked meshes based on MSDM2.

To test the robustness of the digital content watermark, we conducted several experiments with signal-processing and geometry attacks, including noise addition, simplification, smoothing, quantization, subdivision, and cropping. All attacks were performed using the benchmark program provided by Wang et al. [25]. To compare our method against the performance of a conventional scheme, we used the watermarking method of Cho et al. [29], which modifies the statistical features of the distribution of the vertex norms. In this experiment, we used the first method of Cho et al., which employs the mean as a statistical feature. The method in [29] (64-bit payload) was modified to a 0-bit method based on the correlation value between the extracted and the original message. The watermark embedding strength of the experiments was selected to yield an MSDM2 of 0.30.

Experimental results are demonstrated in Table III. The proposed method presented a modest level of robustness except for simplification attacks, rotation, and large amounts of cropping. When the strength of the watermark is minimal, a small amount of error occurred in the cases of

TABLE III

MEDIAN VALUE OF THE DETECTOR RESPONSES AND FALSE-POSITIVE RATE OF THE WATERMARK DETECTION TESTS WITH VARIOUS ATTACKS SUGGESTED BY WANG et al. [25]

quantization and smoothing. It is interesting that our method shows good results under the subdivision attack, because the resampling in the detection process reduces the effect of the attack. Given that there exist very few blind watermarking schemes resilient to cropping attacks [5], [30], [31], the noteworthy point is that our algorithm achieved robustness against a small amount of cropping. The embedded watermark can survive local loss of the surface because the frequency domain was exploited for the watermarking. In addition, axis recovery and uniform resampling provided cropping resilience in the watermark detection step, because watermark desynchronization was the main reason for the cropping vulnerability.

We should note that the proposed scheme cannot provide perfect results even under no attack. The main reason is the uniform remeshing process in the detection stage. In detection, the remeshing process provides a consistent watermarking domain during the 3D print-and-scan process, and it helps greatly in watermark detection in a 3D printing scenario. However, the remeshing process in the detection stage is also regarded as an attack that distorts the original watermarking domain.

C. Robustness Against 3D Print-Scan

In this section, we provide various robustness tests in a 3D printing environment. To conduct the experiments, the watermarked model was changed to an analog model using a 3D printer, and the printed model was digitized into a polygonal mesh using the 3D scanner. The initial version of the scanned data had many artifacts on the bottom side, so we manually post-processed the scanned data to remove noticeable artifacts such as those related to the support structure and peripheral information. For the test,


Fig. 16. The watermarked and 3D printed models: (a) original Bunny, (b) watermarked Bunny model (MSDM2 = 0.262), (c) strongly watermarked Bunny model (MSDM2 = 0.450), (d) original Venus, (e) watermarked Venus model (MSDM2 = 0.168), and (f) strongly watermarked Venus model (MSDM2 = 0.486).

we used four types of models: watermarked Bunny and Venus models with MSDM2 under 0.30, and watermarked Bunny and Venus models with MSDM2 under 0.50. Fig. 16 shows photos of the watermarked and 3D printed models using the Makerbot Replicator. The average height of the printed models was around 6 cm. As described in Section V-A4, a smooth region is associated with weak watermark strength, while a rougher region is significantly distorted.

1) Test With Various Models: For this test, we printed these models using the Makerbot Replicator, and the average height of the printed models was around 6 cm. The watermark embedding strength of the experiments was selected to yield MSDM2 under 0.50. Table IV presents the results of the robustness test with various models and printing parameters. The proposed method succeeded in detecting the embedded watermark without error in three test cases. However, the performance of the proposed method depends on the surface characteristics of the cover model. Our method relies mainly on the performance of the uniform resampling process, which has difficulty reconstructing a complex surface. Therefore, the detection accuracy decreases when the cover model has many complex surfaces, such as occur in a thin, twisted structure (e.g., the Dragon models). In this case, we adaptively changed the watermark strength and the printing direction for the target model.

2) Test With Various Printing Methods: Table V presents the results of watermark detection after the 3D print-scan process. The proposed method succeeded in detecting the embedded watermark without error in two test cases. Our method demonstrated acceptable results in the cases using the FDM-based machines. Among the results, the Venus-2 model demonstrated the best detection performance. Although the watermark

TABLE IV

THE WATERMARK DETECTION RESULTS WITH VARIOUS MODELS AND PRINTING PARAMETERS

TABLE V

THE WATERMARK DETECTION RESULTS WITH VARIOUS 3D PRINTERS

signal was damaged by the 3D print-scan attack, the Venus-1, Venus-2, and Bunny-2 models showed acceptable performance. The result from the Bunny-2 model needs improvement, so the invisibility of the watermark should be reduced according to the circumstances of the desired application. As discussed in Section VI-A, the printing direction of the model printed using the Objet Eden250, a Polyjet-based high-resolution printing machine, was barely estimable. The performance of the watermark detection depends highly on the printing-axis estimator, so the results from the Eden250 might be unstable. For this reason, blind watermark detection from FDM prints is much easier than with the Eden250, although FDM-based printing has relatively low resolution.

3) Test With Various Post-Processing: During the 3D print-scan process, an embedded watermark can be disturbed not only by distortions from 3D printing and scanning but also by distortions from various post-processing actions (see Section II). In this section, we conducted robustness tests against the following post-processing attacks. For the test, we used a watermarked Venus with MSDM2 under 0.50 and printed it using the Makerbot Replicator.

First, we coated the surface of the model with enamel paint to remove the roughness due to the layering artifact. We also used sandpaper to make the surface smooth. Table VI presents the results of the printing-axis estimator and the watermark detection. The results show that we failed to detect the printing axis and the embedded watermark in both smoothing cases. In other words, a smoothing operation on the physical model is a strong removal attack against the proposed scheme. In addition, robustness tests with smaller and larger printing sizes showed interesting results: the accuracy of the printing-axis estimator was not affected by the printing size, but the detection result was degraded by downscaling.

Lastly, we conducted an experiment with a 'double print-scan' attack. For the test, the watermarked and print-scanned model was printed using the same printer with the same parameters. Before re-printing the scanned model, the model


TABLE VI

THE WATERMARK DETECTION RESULTS WITH VARIOUS POST-PROCESSING

TABLE VII

A SUMMARY OF RELATED ROBUST MESH WATERMARKING ALGORITHMS AND THEIR ROBUSTNESS TEST RESULTS (Pfp)

went through the online smoothing attack using the same operation as in Table III (Nitr = 5), because smoothing can be done by attackers to improve the visual quality or to remove the watermark. As shown in Table VI, the embedded watermark was detected after the 'double print-scan' attack without smoothing. It is interesting that our method shows good results in some cases. However, the printing axis of the second printing should match the initial printing axis to detect the watermark. Moreover, we should note that the watermark strength in this test was stronger than what would be used in a real scenario. In the tests using the model with MSDM2 < 0.3, we failed to extract the embedded watermark in most cases.

4) Comparison Test: To compare our method against the performance of conventional schemes, we used the watermarking methods of Cho et al. [29], Hou et al. [7], and Yamazaki et al. [6]. The methods in [7] and [29] were modified into zero-bit methods based on the correlation value between the extracted and the original message. Because the method in [6] only works with simple meshes of around 3,000 vertices, we conducted its comparison experiment on a correspondingly simplified version of the original mesh and compared the results with those from the dense mesh.

Table VII summarizes the performance of the related robust watermarking algorithms and our method. The proposed method successfully detected the embedded data with acceptable false positive probabilities. Meanwhile, the method of Hou et al. showed good results in only one test case, even though it is a non-blind watermarking method. The method of Yamazaki et al. also produced poor results because it has no features that secure robustness against the 3D print-scan process, which includes a layer-slicing attack. Because FDM-based printing has relatively low printing resolution, it is hard to preserve the embedded pattern using the methods of [6] and [7]. We note that the methods of [6] and [7] can be more effective

when the model is printed using higher-quality printers. Furthermore, the method of Cho et al. did not detect the embedded watermark in any case. As explained in [5], the method of [29] did not work against cropping attacks because the range of the histogram could not be synchronized after cropping. The print-scan process causes not only geometric distortion but also various complex attacks such as cropping and remeshing. For this reason, a conventional watermarking scheme, which does not consider the print-scan process, has difficulty operating under it.

VII. DISCUSSION

A. 3D Printing Technologies and the Proposed Method

Today’s 3D printers are concentrated at two ends of a spectrum: high cost-high capability and low cost-low capability [32]. High-end printers are generally targeted at enterprises and 3D printing service bureaus; in contrast, low-end printers are targeted at consumers and hobbyists.

Based on our experimental results, we expect that our method can estimate the parameters of low-cost 3D printing methods such as laminated object manufacturing (LOM) and fused deposition modeling (FDM). In other words, the proposed method covers the most dominant 3D printing solutions of the present day, such as RepRap, Prusa, MakerBot, and Ultimaker [33]. According to an industry report [34] that examines the low-cost 3D printing market, low-cost 3D printing will remain one of the segments with the largest upside in the overall 3D printing industry.

However, the presented layering artifacts may not exist, or may not be detectable, on all 3D printers. The most popular high-end printers, generally based on SLS and PolyJet [35], provide very high layer resolution, so it was hard to capture the layering artifact using a general scanner. Due to the resolution gap between common 3D scanners and high-resolution printing techniques, which affects the estimation performance of the algorithm, reliable parameter estimation for such techniques remains an open problem despite the results of this study.

Ultimately, 3D printing and scanning technology will provide very accurate, almost perfect, copies of the offline 3D model. When higher DA-AD conversion accuracy is ensured, the degree of freedom in the design of watermarking systems is expected to increase. Naturally, the proposed synchronization will not work effectively in that environment. Instead of our method, future research should pursue template-based watermark synchronization or watermark primitives that are invariant to rotation and cropping [29], [30], [36].

B. Limitations

Our research focused on the protection of the watermarked-and-printed model. In maximizing robustness against this harsh environment, some limitations arose from the strict assumptions of our scheme. First of all, our scheme is fragile to rotation in the digital domain, so the content provider must align the model z-axis with the printing-axis. It is true that this assumption restricts the usage scenario of the proposed method. However, the printing direction is generally determined to maximize fabrication stability and to minimize distortions due to the support structure. Thus, once a 3D model is aligned to the stable direction and then watermarked, there is no need to re-align the model in the distribution scenario.

In addition, the core of the algorithm may seem to have a conflicting structure. On the one hand, the printing-axis estimator needs strong printing artifacts to work correctly. On the other hand, watermark extraction may be fragile to strong printing artifacts, although the watermark pattern has a certain level of robustness to the manufacturing process. Moreover, in an online watermarking scenario, the proposed watermarking is weak against rotation because the printing-axis estimator cannot find the original mesh orientation. To overcome such problems, the proposed method can be transformed into a semi-blind watermarking scheme that transmits side information for synchronization. If the limited applicability of a semi-blind algorithm is acceptable, the watermarking becomes free of the rotation vulnerability, and the printing-axis estimator can be removed.

The performance of the proposed method depends on the surface characteristics of the cover model. To achieve a watermark signal that is invariant to changes along the z-axis, we designed the proposed watermarking on the basis of the radius and azimuth (ρ, ϕ) components of the cylindrical coordinate system. Thus, our approach works well with 3D models that have a cylindrical structure, but not with the sharp edges and piecewise planar portions of CAD models. Moreover, our method used uniform remeshing before watermarking to minimize the variation in surface irregularity between the original and scanned models. Consequently, our method relies mainly on the performance of the uniform remeshing, which has difficulty reconstructing complex surfaces. Therefore, the detection accuracy decreased when the cover model had many complex surfaces, such as occur in a thin, twisted structure.
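As a concrete illustration of this design choice (a minimal sketch; the paper's actual embedding domain may differ in detail, and the function name and centering convention are our own), converting vertices to cylindrical coordinates isolates the (ρ, ϕ) pair, which is unchanged by translation along the z-axis and therefore insensitive to where the layer slicing falls:

```python
import numpy as np

def to_cylindrical(vertices, center=None):
    """Convert (x, y, z) vertices to cylindrical (rho, phi, z) about the z-axis.

    The (rho, phi) pair is unchanged by any translation or re-sampling
    along z, which is why a watermark carried by rho survives the
    layer-by-layer printing process better than a z-dependent signal.
    """
    v = np.asarray(vertices, dtype=float)
    if center is None:
        center = v[:, :2].mean(axis=0)  # center of mass in the xy-plane
    x, y = v[:, 0] - center[0], v[:, 1] - center[1]
    rho = np.hypot(x, y)                # radial distance from the axis
    phi = np.arctan2(y, x)              # azimuth angle
    return rho, phi, v[:, 2]
```

Shifting a vertex along z leaves its (ρ, ϕ) component bit-for-bit identical, whereas any Cartesian component would change.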

The detection performance of spread-spectrum watermarking is affected by the watermark strength, the bandwidth length, and the strength of the noise from the transfer channel. First, the bandwidth used for the watermarking is determined by the maximum resolution that the printer can output and that the scanner can capture. Therefore, we experimentally determined a watermark bandwidth of about 50 to 200, which is relatively narrow compared to a stable watermark length. In addition, the noise generated by the printing process and material is also strong, which limits the imperceptibility of the watermark. Thus, the proposed approach cannot guarantee perfect invisibility under current printing technology, even though current 3D printing creates models with relatively high accuracy and resolution.
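These trade-offs can be made concrete with a textbook zero-bit spread-spectrum detector in the spirit of [3] and [23] (a generic sketch with illustrative parameter values, not the paper's exact detector): the detection score grows with the watermark strength and with the square root of the bandwidth, and shrinks as channel noise inflates the signal variance.

```python
import numpy as np

def pattern(key, n):
    """Key-dependent pseudo-random ±1 spreading sequence."""
    return np.random.default_rng(key).choice([-1.0, 1.0], size=n)

def embed(carrier, key, strength):
    """Add the spreading sequence to the carrier (e.g., binned rho values)."""
    return np.asarray(carrier, dtype=float) + strength * pattern(key, len(carrier))

def detect(signal, key, threshold=3.0):
    """Zero-bit detection via normalized correlation.

    On unwatermarked input the score behaves approximately like N(0, 1),
    so `threshold` bounds the false-positive probability; a watermark of
    strength a over bandwidth n raises the expected score by about
    a * sqrt(n) / sigma, where sigma is the channel-noise level.
    """
    signal = np.asarray(signal, dtype=float)
    residual = signal - signal.mean()
    score = residual @ pattern(key, len(signal)) / (residual.std() * np.sqrt(len(signal)))
    return score > threshold, score
```

With a bandwidth of 200 and strength 0.5 against unit-variance noise, the expected score is roughly 0.5 · √200 / 1.1 ≈ 6, comfortably above a threshold of 3; halving the bandwidth to the paper's lower bound of 50 halves the margin, which is why the narrow usable bandwidth limits robustness.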

VIII. CONCLUSION

As far as robustness is concerned, 3D printing brings new challenges to 3D model watermarking because printed objects are distributed and handled both online and offline. Coordinate and file information are totally lost during the 3D printing process, so existing watermarking methods cannot be readily applied in this context. Moreover, the print-and-scan process imposes a combined distortion of successive attacks

on the watermark: layering artifacts, cropping, remeshing, and so on. All current 3D model watermarking technologies have concentrated on ordinary attacks in the digital domain but have overlooked digital-to-analog-to-digital conversion.

We propose a new blind watermarking system that achieves both robustness and blindness of the watermark in this hostile environment. For robustness to the layering process, we based our proposal on a component that is unchanging with respect to the printing direction. The printed-and-scanned model is synchronized to the original orientation using the analysis of the printing artifacts, which are treated as a template that provides orientation information to the watermark detector. The watermarking scheme, with 1) a watermark primitive perpendicular to the printing-axis and 2) watermark synchronization using the printing-axis estimator, achieves both blindness and robustness in a 3D printing environment.

The results of our tests confirmed experimentally that the proposed method does not lose the embedded pattern after the 3D print-scan process, especially with low-cost printers. However, the results also showed that the current system's usage and performance are limited. First of all, the imperceptibility and robustness are not yet sufficient for a real-life deployable system. Moreover, our approach does not work well with 3D models featuring complex surfaces, sharp edges, and piecewise planar portions, such as CAD models. A remeshing stage is necessary at detection to obtain a consistent watermarking domain, but it generally acts as an attack that impairs detection performance. In addition, the content provider must align the model z-axis with the printing direction to benefit from the printing-axis estimator.

To the best of our knowledge, this is the first 3D watermarking system robust to the print-and-scan process. Still, as mentioned earlier, many issues remain to be solved, and there are several avenues for follow-up research that would bring the method closer to real-life deployment. First, the remeshing process at detection causes many limitations; this part should therefore be improved or removed by an improved detection algorithm. If a printing-noise attenuator can be designed using the printing-axis estimator, the robustness and invisibility of the watermarking could be improved. In addition, performance could be improved by using a watermark primitive perpendicular to the printing-axis other than the cylindrical coordinates used in this paper. For instance, rotation-invariant features such as spherical harmonics [37], vertex histograms [29], and shape distributions [38] could be interesting candidates for withstanding the drastic distortions of the print-and-scan process.

REFERENCES

[1] R. Stern, “Napster: A walking copyright infringement?” IEEE Micro, vol. 20, no. 6, pp. 4–5, Nov. 2000.

[2] E. Diehl and T. Furon, “Watermark: Closing the analog hole,” in Proc. IEEE Int. Conf. Consum. Electron. (ICCE), Jun. 2003, pp. 52–53.

[3] T. Furon, “A constructive and unifying framework for zero-bit watermarking,” IEEE Trans. Inf. Forensics Security, vol. 2, no. 2, pp. 149–163, Jun. 2007.

[4] A. M. Eskicioglu, D. E. Virag, D. J. Duffield, M. S. Deiss, and B. W. Beyers, Jr., “Global copy protection system for digital home networks,” U.S. Patent 8 332 657 B1, Dec. 11, 2012.

[5] K. Wang, G. Lavoué, F. Denis, and A. Baskurt, “A comprehensive survey on three-dimensional mesh watermarking,” IEEE Trans. Multimedia, vol. 10, no. 8, pp. 1513–1527, Dec. 2008.

[6] S. Yamazaki, S. Kagami, and M. Mochimaru, “Extracting watermark from 3D prints,” in Proc. 22nd Int. Conf. Pattern Recognit., Aug. 2014, pp. 4576–4581.

[7] J.-U. Hou, D.-G. Kim, S. Choi, and H.-K. Lee, “3D print-scan resilient watermarking using a histogram-based circular shift coding structure,” in Proc. 3rd ACM Workshop Inf. Hiding Multimedia Secur., New York, NY, USA, 2015, pp. 115–121.

[8] B. Macq, P. R. Alface, and M. Montanola, “Applicability of watermarking for intellectual property rights protection in a 3D printing scenario,” in Proc. 20th Int. Conf. 3D Web Technol., New York, NY, USA, 2015, pp. 89–95.

[9] P. Kulkarni and D. Dutta, “An accurate slicing procedure for layered manufacturing,” Comput. Aided Des., vol. 28, no. 9, pp. 683–697, 1996.

[10] D. Ahn, H. Kim, and S. Lee, “Fabrication direction optimization to minimize post-machining in layered manufacturing,” Int. J. Mach. Tools Manuf., vol. 47, nos. 3–4, pp. 593–606, 2007.

[11] J. F. Hughes, A. van Dam, J. D. Foley, and S. K. Feiner, Computer Graphics: Principles and Practice. Upper Saddle River, NJ, USA: Pearson Education, 2013.

[12] S. Prasad and K. R. Ramakrishnan, “On resampling detection and its application to detect image tampering,” in Proc. IEEE Int. Conf. Multimedia Expo, Jul. 2006, pp. 1325–1328.

[13] B. Mahdian and S. Saic, “Blind authentication using periodic properties of interpolation,” IEEE Trans. Inf. Forensics Security, vol. 3, no. 3, pp. 529–538, Sep. 2008.

[14] R. C. Gonzalez, R. E. Woods, and S. L. Eddins, Digital Image Processing Using MATLAB. Upper Saddle River, NJ, USA: Pearson Education, 2004.

[15] K. Wang, G. Lavoué, F. Denis, and A. Baskurt, “Robust and blind mesh watermarking based on volume moments,” Comput. Graph., vol. 35, no. 1, pp. 1–19, Feb. 2011.

[16] C. Zhang and T. Chen, “Efficient feature extraction for 2D/3D objects in mesh representation,” in Proc. Int. Conf. Image Process., vol. 3, Oct. 2001, pp. 935–938.

[17] L. D. Harmon and B. Julesz, “Masking in visual recognition: Effects of two-dimensional filtered noise,” Science, vol. 180, no. 4091, pp. 1194–1197, 1973.

[18] C. I. Podilchuk and W. Zeng, “Image-adaptive watermarking using visual models,” IEEE J. Sel. Areas Commun., vol. 16, no. 4, pp. 525–539, May 1998.

[19] G. Lavoué, “A local roughness measure for 3D meshes and its application to visual masking,” ACM Trans. Appl. Perception, vol. 5, no. 4, p. 21, 2009.

[20] G. Lavoué, “A multiscale metric for 3D mesh visual quality assessment,” Comput. Graph. Forum, vol. 30, no. 5, pp. 1427–1437, 2011.

[21] K. Anjyo, J. P. Lewis, and F. Pighin, “Scattered data interpolation for computer graphics,” in Proc. ACM SIGGRAPH Courses, 2014, p. 27.

[22] E. Praun, H. Hoppe, and A. Finkelstein, “Robust mesh watermarking,” in Proc. 26th Annu. Conf. Comput. Graph. Interact. Techn., New York, NY, USA, 1999, pp. 49–56.

[23] I. Cox, M. Miller, J. Bloom, J. Fridrich, and T. Kalker, Digital Watermarking and Steganography, 2nd ed. San Francisco, CA, USA: Morgan Kaufmann, 2008.

[24] E. Kreyszig, Introductory Mathematical Statistics: Principles and Methods. New York, NY, USA: Wiley, 1970.

[25] K. Wang, G. Lavoué, F. Denis, A. Baskurt, and X. He, “A benchmark for 3D mesh watermarking,” in Proc. IEEE Shape Modeling Int. Conf. (SMI), Jun. 2010, pp. 231–235.

[26] G. Peyre, “The numerical tours of signal processing part 2: Multiscale processings,” IEEE Comput. Sci. Eng., vol. 13, no. 5, pp. 68–71, Sep./Oct. 2011.

[27] A. V. Oppenheim et al., Discrete-Time Signal Processing, vol. 2. Englewood Cliffs, NJ, USA: Prentice-Hall, 1989.

[28] S. S. Crump, J. W. Comb, W. R. Priedeman, Jr., and R. L. Zinniel, “Process of support removal for fused deposition modeling,” U.S. Patent 5 503 785 A, Apr. 2, 1996.

[29] J.-W. Cho, R. Prost, and H.-Y. Jung, “An oblivious watermarking for 3-D polygonal meshes using distribution of vertex norms,” IEEE Trans. Signal Process., vol. 55, no. 1, pp. 142–155, Jan. 2007.

[30] P. R. Alface, B. Macq, and F. Cayre, “Blind and robust watermarking of 3D models: How to withstand the cropping attack?” in Proc. IEEE Int. Conf. Image Process. (ICIP), vol. 5, Sep./Oct. 2007, pp. V-465–V-468.

[31] X. Rolland-Nevière, G. Doërr, and P. Alliez, “Anti-cropping blind resynchronization for 3D watermarking,” in Proc. IEEE Int. Conf. Acoust. Speech Signal Process. (ICASSP), Apr. 2015, pp. 1702–1706.

[32] A. Earls and V. Baya. (2016). The Road Ahead for 3D Printers. [Online]. Available: http://www.pwc.com/us/

[33] (2016). 3D Printing Industry Trends Q3-2016. [Online]. Available: https://www.3dhubs.com/trends

[34] SmarTech Markets Publishing LLC. (2016). Opportunities in Low-Cost 3D Printers: Technologies, Materials and Markets. [Online]. Available: http://www.researchandmarkets.com/research/tt84dr/opportunities_in

[35] I. Gibson, D. Rosen, and B. Stucker, Additive Manufacturing Technologies, 2nd ed. New York, NY, USA: Springer-Verlag, 2015.

[36] O. Benedens, “Affine invariant watermarks for 3D polygonal and NURBS based models,” in Proc. Int. Workshop Inf. Secur., 2000, pp. 15–29.

[37] M. Kazhdan, T. Funkhouser, and S. Rusinkiewicz, “Rotation invariant spherical harmonic representation of 3D shape descriptors,” in Proc. Symp. Geometry Process., vol. 6, 2003, pp. 156–164.

[38] R. Osada, T. Funkhouser, B. Chazelle, and D. Dobkin, “Matching 3D models with shape distributions,” in Proc. IEEE Int. Conf. Shape Modelling Appl. (SMI), May 2001, pp. 154–166.

Jong-Uk Hou received the B.S. degree in information and computer engineering from Ajou University, South Korea, in 2012, and the M.S. degree in Web science and technology from KAIST, South Korea, in 2014, where he is currently pursuing the Ph.D. degree with the Multimedia Computing Laboratory, School of Computing. His major interests include various aspects of information hiding, computer vision, machine learning, and multimedia signal processing. He received a Global Ph.D. Fellowship from the National Research Foundation of Korea in 2015.

Do-Gon Kim received the B.S. degree in computer science and engineering from Kyungpook National University, South Korea, in 2014, and the M.S. degree from the School of Computing, Korea Advanced Institute of Science and Technology, South Korea, in 2016. He is currently a Researcher with Samsung Electronics. His major interests include various aspects of information hiding, multimedia signal processing, and convolutional neural networks.

Heung-Kyu Lee received the B.S. degree in electronics engineering from Seoul National University, Seoul, South Korea, in 1978, and the M.S. and Ph.D. degrees in computer science from the Korea Advanced Institute of Science and Technology, South Korea, in 1981 and 1984, respectively. Since 1986, he has been a Professor with the School of Computing, KAIST. He has authored/coauthored over 200 international journal and conference papers. His major interests are digital watermarking, digital fingerprinting, and digital rights management. He has been a reviewer of many international journals, including the Journal of Electronic Imaging, Real-Time Imaging, and the IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY.