ios - CALayer vs CGContext, which is a better design approach?


I'm experimenting with iOS drawing. To make it practical, I wrote a bar chart component with the following class diagram (OK, I was not allowed to upload images, so let me describe it in words): I have an NGBarChartView that inherits from UIView, plus two protocols, NGBarChartViewDataSource and NGBarChartViewDelegate. And the code is

To draw the bar chart, I create each bar as a separate CAShapeLayer, for two reasons: first, I can just create a UIBezierPath and enclose it in the CAShapeLayer object; and second, I can easily track whether a bar has been touched by using the [layer hitTest:] method. The component works very well. However, I am not comfortable with the approach I have taken to draw the chart, so this post asks for expert opinion on the following:
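A minimal sketch of the approach described above (the method names and colors here are hypothetical illustrations, not taken from the actual NGBarChartView code):

```objc
// Assumed sketch: one CAShapeLayer per bar, hit-tested via -hitTest:.
- (CAShapeLayer *)addBarLayerWithRect:(CGRect)barRect {
    CAShapeLayer *bar = [CAShapeLayer layer];
    bar.frame = barRect;
    // The path is expressed in the layer's own coordinate space.
    bar.path = [UIBezierPath bezierPathWithRect:bar.bounds].CGPath;
    bar.fillColor = [UIColor blueColor].CGColor;
    [self.layer addSublayer:bar];
    return bar;
}

// UIView touch handling: find which bar layer was tapped.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [[touches anyObject] locationInView:self];
    // -hitTest: expects a point in the receiver's superlayer's
    // coordinate space, so convert the view-local point first.
    CGPoint q = [self.layer convertPoint:p toLayer:self.layer.superlayer];
    CALayer *hit = [self.layer hitTest:q];
    if ([hit isKindOfClass:[CAShapeLayer class]]) {
        // A bar was tapped; notify the delegate, highlight it, etc.
    }
}
```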

  1. I'm not really using a UIGraphics context at all; instead I'm using CAShapeLayers and creating the bars as bezier paths. Is this a good design?
  2. My view creates many CALayers inside one UIView. Performance-wise, is there a limit on the number of CALayers you can create in a UIView?
  3. If using CGContext* is the better option, what is the correct way to identify whether a particular path has been touched?
  4. From the animation point of view, such as highlighting a bar when it is tapped, is the layer design or the CGContext design better?

Help is very much appreciated. BTW, you are free to look at my code and comment on it; I'll gladly accept any suggestions for improvement.

    Best, Fife

IMO, generally, any kind of shape drawing requires heavy processing power, whereas blending cached bitmaps with the GPU is much cheaper than drawing everything again. So in many cases we cache all the drawings into a bitmap, and on iOS, CALayer is in charge of this.

However, if your bitmaps exceed the video-memory limit, Quartz cannot composite all the layers at once. Consequently, Quartz must draw a single frame in more than one pass, and that requires some textures to be reloaded into the GPU. This is unlikely to happen, because the iPhone's VRAM is known to be integrated with system RAM, but it is still true that extra work is required in that case. If system memory becomes insufficient, the system may purge existing bitmaps and redraw them later.

  1. CAShapeLayer does all the CGContext work (I believe that's what you mean) for you. If you feel the need for lower-level optimization, you can do it yourself.

  2. Yes, everything is limited performance-wise. If you are using hundreds of layers with large alpha-blended graphics, the layer count will cause performance problems. Usually, though, this does not happen, because layer composition is accelerated by the GPU. If you do not have too many graph lines and they are basically opaque, you will be fine.

  3. Once graphics drawings are composited, there is no way to decompose them back. Because composition is a kind of optimization by lossy compression, you have only two options: (1) redraw all the graphics whenever a change is required, or (2) keep a cached bitmap of each display element (such as a graph line) and composite them as needed. The latter is exactly what CALayers do.
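If you do go the single-CGContext route, one common pattern for the hit-testing question (a sketch under the assumption that the view keeps a `barPaths` array in sync with what `drawRect:` renders; neither name is from the poster's code) is to hit-test the retained paths yourself:

```objc
// Assumed sketch: self.barPaths is an NSArray of UIBezierPath objects,
// one per bar, in view coordinates, kept in sync with drawRect:.
- (NSInteger)indexOfBarAtPoint:(CGPoint)p {
    for (NSInteger i = 0; i < (NSInteger)self.barPaths.count; i++) {
        // -containsPoint: tests the fill area of the path
        // (it wraps CGPathContainsPoint).
        if ([self.barPaths[i] containsPoint:p]) {
            return i;
        }
    }
    return NSNotFound;
}
```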

  4. Absolutely, the layer-based approach is far better. Any kind of free drawing (even when it is done on the GPU) requires much more processing power than simple GPU bitmap composition (which becomes just two textured triangles), provided, of course, that your layers do not exceed the video-memory limit.

    Hope it helps.
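One practical consequence of the layer-based design for the tap-highlight case: animatable layer properties such as `fillColor` are implicitly animated by Core Animation on standalone (non-view-backing) layers, so a highlight can be a single property change (a sketch; the method name and color are illustrative):

```objc
// Assumed sketch: highlighting a tapped bar. Because `bar` is a
// standalone sublayer, Core Animation implicitly cross-fades the
// fillColor change on the GPU; no path redraw is needed.
- (void)highlightBar:(CAShapeLayer *)bar {
    bar.fillColor = [UIColor redColor].CGColor;
}
```

With the CGContext design, the same effect would require invalidating the view and re-executing all of the drawing code in `drawRect:` for every animation frame.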
