
Premium Member · 2001 5S "Turbo" · 10,681 Posts
"Balanced" or "equal"?
(Med time)
 

Registered · 6,837 Posts · Discussion Starter #102 (Edited)
That shows the importance of cell level work which is even more time consuming and tricky than stick level!! I def need to build my semi automated stick tester. It's been discussed lots of times. I even have a schematic somewhere...
Indeed. Yet I was also thinking that most of the stick voltages are so similar that a value about 0.1V lower could be a tip-off... Problem is, I don't think the typical pack has such similar voltages; it's really only obvious when you see all the cell voltages; and, if anything, we usually only look at tap voltages. So, yeah, really hard to get anywhere just looking at aggregates.

I too start thinking about some sort of automated cell-level tester when doing this stuff. Yet I'm also always thinking 'this is the last time I mess with NiMH'... I'm really having a hard time 'this year', as I have all these sticks with great cells - except for these seemingly aberrant single-cell HSD ones. I'm still mulling over whether it's worth building a cell zapper, such as using one or more of the MDM caps. I've been leaning away from that; I've been trying to peel myself away from NiMH altogether and get to building an LTO pack. But it's hard to just say "No" to Insight NiMH...

....You may have touched on the reason folks have such poor success with rebuilding efforts. I have done a lot of cell level qualification based on heavy test currents and high indicated internal resistance (IR), but even that did not ensure good packs. Looks like I may have overlooked the importance of self discharge (SD). One has to be impressed with how uniformly "good" cells are in their SD rates.
Yeah, I think a single high self discharge cell is probably one of the main reasons for failed rebuilds. Unless you look at the cell level, it'd be hard to spot. I didn't start looking at the cell level with respect to self discharge until late in the game (a few years ago), and even then I still gave it short shrift. I think the main reason we didn't pay enough attention is that we didn't fully appreciate that voltage alone can indicate HSD; you don't need to discharge the cells, but you do need to look at the voltages...

Of the three major parameters - IR, self discharge, and capacity - what is your ranking of defect likelihood?
I haven't worked with enough packs to really know. I think HSD is probably the kiss of death for most packs. The car seems able to work with high IR and capacity issues: failure will only be gradual, and you can actually alter your driving behavior or putz with the OEM management to make things work. But with high self discharge there's a threshold beyond which you'll never be able to use the pack.

I think I'm starting to grasp how these failure modes work or don't work with the OEM management. For example, with a really fast self discharge cell, you can have a totally depleted cell in a matter of days. If you left the pack after a pos recal, say 75% charge state, and let the car sit for a few days, you could come back and have an imbalance of just about 75%. You'll get a neg recal, the car will try to charge the pack and it will allow more charging than usual, but if your other cells are at 75% charge state, you only have a max 25% headroom. In effect, your working, usable capacity will always be under 25%, and probably quite a bit less. And then, you only have ~15% wiggle room until you get a P1449-78 code, when the car can't charge the pack more than 10% (and I've actually seen this triggered at 20%). In other words, if most cells are at 75% and one is empty, the car will struggle to charge the empty cell more than 10% while keeping the other cells from overcharging.
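To make that squeeze concrete, here it is in a few lines of Python (a sketch using the round numbers from this paragraph; the 75% and 10% figures are illustrative, not measurements):

# Back-of-envelope sketch of the headroom squeeze described above.
others_soc = 75      # % charge state of the healthy cells after the car sat
hsd_soc = 0          # % charge state of the fast self-discharge cell, fully drained
imbalance = others_soc - hsd_soc     # ~75 points of imbalance
headroom = 100 - others_soc          # only ~25 points of charging before healthy cells are full
print(imbalance, headroom)           # 75 25
# with charging effectively capped near 10 points (P1449-78 territory),
# the empty cell can never be brought back up in the car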

So basically, HSD is what I think must push packs over the edge. IR and capacity issues are usually at least partially intermingled. In the first instance a deteriorated pack will have low capacity and high IR - low capacity due to degraded active material, and low apparent capacity because high IR makes voltage sag and triggers end of discharge prematurely (it also triggers premature end of charge, due to the voltage spike). But a lot of this is reversible via discharge, deep/ultra-deep discharge, etc. After that, though, you might be left with cells that seem damaged in some more material way, that still exhibit high IR. I just can't say how common that is.

But I don't think even these high IR cells will cause a rebuild to fail. I think it's probably pretty cut and dried, that it depends on just how bad the IR is, and also on how you use the car. High IR cells won't charge as much as the others, and I think the amount is probably at least in part a strictly 'ohmic' calculation: if you have a 2 cell stick with one cell at 1.40V during charge and one at 1.45V during charge, where the voltage difference is due to higher IR in the 1.45V cell, then the 1.45V cell will charge less at a given current; I think it'd be simply ~3.6% less, or perhaps you could say it would require 3.6% more current to charge the same amount as the 1.40V cell. So, for every 100mAh of charging you 'lose' 3.6mAh in the 1.45V cell. That's a relatively tiny amount. I don't think that would add up or compound in any debilitating way over time and usage, vis-a-vis the mild balancing that you can get under OEM management. But, I don't know, I'm not too sure about this stuff, it gets confusing and complicated.
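For what it's worth, here's that 'strictly ohmic' guess sketched out. This is just the above reasoning in code form - the premise that the extra 0.05V is pure resistive loss that doesn't get banked is the assumption, not an established model:

# Sketch of the 'strictly ohmic' estimate: treat the extra 0.05V on the
# high-IR cell as pure resistive loss, i.e. that fraction of the charge isn't stored.
v_normal = 1.40       # V during charge, lower-IR cell
v_high_ir = 1.45      # V during charge, higher-IR cell
loss_fraction = (v_high_ir - v_normal) / v_normal
print(f"{loss_fraction:.1%} less charge accepted")         # ~3.6%
print(f"{100 * loss_fraction:.1f} mAh lost per 100 mAh")   # ~3.6 mAh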

To answer your question, I think high IR and capacity issues are the more common, perennial issue in our packs and have to do with 'crud'. (Ultra-)deep discharging and grid charging can get rid of a lot of the crud and can lower IR and increase capacity. So, I think I'd rank these as number 1. HSD cells are probably number 2, but a more serious failure. Aberrant high IR cells number 3... I could be wrong though. Also, perhaps at the core of all issues: it seems like my middle cells are more often the failures, so I'm thinking that differential heating and cooling could be the, or an, initial source of failure, the thing that kicks it all off. Middle cells get hotter and stay hotter longer than stick-end cells, all else being equal. Their position in the pack would also make a difference...

Also, overall, any degradation is competing with that mild balancing I mention above. Off the top of my head, I'm thinking OEM management might be able to cope with something like 5-10% imbalance under generic driving circumstances, the typical person's usage... I think it's possible or likely that a 'typical drive' can shave at least a few percentage points off any cell-to-cell imbalance.

I'm a bit confused by the term "equilibrium voltage." How do you define this term?
As I recall it's based on thermodynamic calculations of the active materials of the cell - the potential difference (i.e. voltage) you calculate from the negative electrode half reaction and the positive electrode half reaction. This 1.318V value, though, is something I saw in, I think, that 'Civic' battery research paper that's been around IC for ages*... What's fascinating is just how close a lot of my cell voltage readings are to this value... "Equilibrium" conditions in this context are what the battery folks call anything other than charging or discharging. It's similar to "resting voltage," only a resting voltage measured under ideal circumstances: a new cell, half charged, 25°C, etc. I think it's also an antonym of "transient" conditions, where charging and discharging are considered transient...

*Page 32 of 'Civic Battery Paper.pdf', among the docs in the downloadable manual bundle. Given the half reaction potential at the positive electrode, +0.49V, and at the negative electrode, -0.823V, the total reaction is 1.318V (though combining these I only get 1.313V - must be some rounding or something). Here's a snip:

[attachment 83653: snip from page 32 of the Civic battery paper showing the half reaction potentials]
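Just to show the arithmetic behind that footnote (the two half-reaction values are as quoted above):

# Half-reaction sum; the 5mV gap vs. the paper's 1.318V is presumably
# rounding in the quoted values.
e_positive = +0.490    # V, positive electrode half reaction
e_negative = -0.823    # V, negative electrode half reaction
print(round(e_positive - e_negative, 3))   # 1.313 V equilibrium cell voltage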
 

Registered · 6,837 Posts · Discussion Starter #103
I took a closer look at the sticks with 'red-highlighted' cells, the high self discharge cells. I discharged those cells individually to see what capacity they had left given their relatively lower voltages. Although it's clear that all the red cells have high self discharge, given the remaining capacity they wouldn't all cause problems in a rebuild, at least in the near term. If self discharge didn't get worse, a couple of them would probably be OK. All the non-HSD cells discharged almost exactly the same amount, between 4600mAh and 4700mAh, so they probably self discharged very very little, remaining at about 71% SoC.

Here are the data, starting with the stick with the lowest-voltage HSD cell (the per-day rates fall out of a simple shortfall calculation, sketched after the list):
S50: cell 2 at 1.225V vs. other cells averaging 1.324V, -0815mAh, about -115mAh SD/day
S37: cell 5 at 1.249V vs. other cells averaging 1.324V, -1494mAh, about -096mAh SD/day
S41: cells 3 & 5 at 1.268V vs. other cells averaging 1.326V, -2408,2343mAh, about -70mAh SD/day
S41: cell 1 at 1.319V discharged -4735 mAh
S45: cell 5 at 1.282V vs. other cells averaging 1.324V, -3233mAh, about -041mAh SD/day
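Roughly how those per-day rates fall out, as a sketch (assuming ~4650mAh as the typical good-cell discharge, over the 33-day sit mentioned below):

# good cells discharged ~4650 mAh after the 33-day sit, so an HSD cell's
# shortfall divided by 33 days gives its average self-discharge rate
GOOD_AVG_MAH = 4650    # midpoint of the 4600-4700 mAh non-HSD discharges
DAYS = 33              # how long the sticks sat after being left at ~73% SoC
hsd = {"S50 c2": 815, "S37 c5": 1494, "S41 c3": 2408, "S41 c5": 2343, "S45 c5": 3233}
for cell, mah in hsd.items():
    print(cell, round((GOOD_AVG_MAH - mah) / DAYS), "mAh/day")
# -> 116, 96, 68, 70, 43 mAh/day: within a few mAh of the list above;
#    the small gaps are presumably per-stick baseline differences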

So, a couple thoughts: You can see that S45 cell 5, at 1.282V, still held quite a decent amount of charge, losing on average only 41 mAh per day. If this cell didn't deteriorate in terms of SD, it'd probably work.

S41 cell 1, a "questionable" cell at 1.319V, put out as much as the good cells...

The middling-value HSD cells, 1.249V and 1.268V, lost about 96 mAh/day and 70 mAh/day, respectively. After a bit over a month sitting, they held about 32% to 51% as much as the good cells. My guess is that these would eventually cause problems in a normally driven/used car, though one could probably be vigilant and take steps to help ensure that they stay charged.

S50 cell 2 at 1.225V, losing 115 mAh/day, holding only about 18% after a month sitting? Same thing - it'd cause problems without 'vigilance'.

....All the sticks were more or less conditioned, balanced, and left at about 73% charge state 33 days ago. Most of the cells should still be at around 70% charge state. The red-ish highlighted cells are the ones with high self discharge (HSD) - they likely have very little charge left in them. Most of the sticks with these cells won't work; they'll cause any pack they're in to fail. Stick 50 for sure, probably stick 37, and 45 and 41 will cause problems eventually. The yellow highlighted cells are questionable. Sticks 71 to 74 at the bottom come from a different pack, which is why their voltages are slightly different. All the sticks are from 2007 Civics, though.
 

Registered · 6,837 Posts · Discussion Starter #104
Here's a link to a post that goes into the battery management implications of some things I saw working with another fast self discharge cell/stick: https://www.insightcentral.net/threads/oem-pack-management-efficiency-not-so-good.117785/#post-1450808
I posted a graphic there illustrating how the deep discharge that a fast self discharge cell will experience actually ends up preventing voltage depression - in that particular cell - while it would end up inducing voltage depression in the other cells. I guess I'll just post the graphic here as well. If you're interested in more info you can go over to that thread.

[attachment 83680: graphic illustrating how deep discharge in a fast self discharge cell prevents voltage depression in that cell while inducing it in the others]
 

Registered · 6,837 Posts · Discussion Starter #105
I've talked quite a bit about self discharge in this thread lately, and also in a couple related threads. And, in general, I've talked about 'battery management' in related ways in one or two other threads. Since most of the self discharge material is in this thread, I want to put the following musings here as well, though they have more to do with 'battery management' than 'NiMH voltage' per se...

I started talking about self discharge some posts back, linked here: https://www.insightcentral.net/threads/the-quintessential-insight-nimh-voltage-thread.89298/page-5#post-1450644

I was thinking about self discharge, battery management, and reconditioning last night, and vaguely realized that it wouldn't be a very difficult 'calculation' to figure out just how much self discharge is or would be too much. I think there's only a handful of things to think about, to understand. Here's a back of the envelope sketch:

-In general, we have a max usable capacity of about 4000 mAh. Self discharge is a matter of X many mAh per time span, such as days, for example 100mAh per day.
-Driving, particularly certain kinds of driving and use of the IMA, produces some balancing. We can quantify that by saying we get X mAh of balancing 'per drive', or 'per day' or something like that.

-Add one difficult nuance: 120 cells at potentially different charge states with potentially different self discharge rates. I don't think we need to deal with this yet for the sake of just painting the broad strokes, but it matters...

I think that's it. The idea is simply this: Your self discharge rate has to be slower than the net balancing effect/rate you get from however much and the type of driving you do, in the context of a total usable capacity of about 4000mAh.

So, some posts up I think my worst self discharge cell was something like 115mAh per day. Assuming the pack were fully car-charged (i.e. about 4000mAh usable capacity) and balanced at start, that cell would be totally empty in 4000mAh/115mAh/day = 35 days. Obviously I'd need to drive it more than once a month.

The question is, 'How much would I need to use the car to continue using this pack without having that self discharge cell mess things up?'

This hinges on a couple things, mainly, How much balancing do/can I get from top-end pack use?

I don't have solid data for this, but I think I can make some worthwhile guesstimates. The amount of self discharge 'off the top', that is, right after a true 100% full charge, is something like a few hundred milliamp-hours: if you charge a cell to full and discharge shortly after, you get a ~6000mAh discharge; if you charge a cell to full and discharge the next day, you get something like 5700mAh, i.e. 300mAh of self discharge off the top. Something similar plays a role in the top-end balancing that should happen in the car.

Now, I'm gonna say that this off-the-top self discharge plays out over the top ~1600mAh of capacity. So if you have a 6500mAh cell, you're gonna get top-end SD only over the top 25% of the charge state range, i.e. anything above 75%. This means that if you charge the cell above 75% you'll get some 'top-end' self discharge: charge to 100% and you get the full 300mAh; charge to 75% and you get nothing. It also means that the type of driving, the type of IMA usage, that produces balancing has to happen above 75% SoC...

So let's assume we get to charge to 85% SoC, or 10 points above 75%. This means we can get 10%/25% x 300mAh = 120mAh of top-end SD balancing. Cell voltage differences play a role as well - you get some balancing during the charge, since higher-voltage cells charge more slowly - but I think it's a relatively small amount, so I'll ignore it and assume the only balancing we get is the top-end SD variety...
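Pulling those guesstimates together in one sketch (every value here is a guesstimate from above - the 300mAh top-end SD, the linear-over-the-top-25% assumption, and the 115mAh/day cell):

# top-end SD (~300 mAh after a true 100% charge) assumed to scale
# linearly over the top 25% of the SoC range
TOP_END_SD_MAH = 300
def top_end_balancing(soc_pct):
    """mAh of balancing from one charge to soc_pct; zero at or below 75% (assumption)."""
    return max(0.0, (soc_pct - 75) / 25) * TOP_END_SD_MAH

sd_rate = 115                       # mAh/day, the worst cell from a few posts up
print(top_end_balancing(85))        # 120.0 mAh per car-full charge, barely above 115
print(round(4000 / sd_rate))        # ~35 days to empty from car-full if never driven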

In my example case I have a 115mAh SD/day cell, and I'm saying I can get 120mAh top-end SD balancing per... day: On day 1 I drive the car, the cells are balanced, and they're all charged to 75%. On day 2 this high SD cell has lost 115 mAh. IF on day 2 I drive and charge the pack to car-full, calling this 85% SoC, I should get a balancing effect that restores this cell to the level of others. On day 3 the cell will again be -115mAh...

So, I'd have to drive every day and charge to full every day in order for top-end SD balancing to overcome a 115 mAh/day self discharge rate.

Although some of these values are guesstimates, I think they're probably ballpark. You should be able to extrapolate from this example to generate scenarios for different driving regimes and different self discharge rates. My -115mAh SD/day cell seems borderline. Having to drive every day and charging to full every day is an extreme usage scenario: People don't usually do these things on a normal, day-to-day basis. If I were driving normally, this cell would gradually self discharge and drop lower and lower relative to other cells, and cause IMA problems.
 

Registered · 6,837 Posts · Discussion Starter #106 (Edited)
For a while now I've been trying to get a sense of how well or whether voltages can be used as a proxy for capacity balance or imbalance. I think the general assumption is that balanced voltages, such as at cell or tap level, should approximate balanced capacity - i.e. if the cells' capacities/charge states are balanced, the voltages should be similar. To some extent this is certainly true, but after doing a grid charge, driving and draining the pack, and logging some tap voltages, I can more clearly see at least one pretty solid case where it doesn't hold true very well.

I've generally known that voltages can vary more after a full charge, such as cell voltages for a stick or tap voltages for a pack. But I've never really had a good sense of the amount of variation that's 'acceptable' and how that variation may map to capacity differences. Basically, one question I've had is this: Good, balanced, 'optimized' cells will have very even voltages (like within a few millivolts), but do uneven voltages necessarily mean uneven capacity and/or to what extent?

Here are some tap voltages illustrating these points (the rows use a shorthand; a little decoder sketch follows them).

1) The first set is at the end of a grid charge (still loaded at about 250mA) that should have pushed all cells to about full. I estimated that some cells would have been overcharged by something like 30%.

2) The second row is the next day before a drive (~12 hours after GC).

3) And the third row is a few days later after draining the pack down to about 16% true state of charge. Most of that drain happened on day 1 (about -5000mAh).

1) 17.54,41,44,44,38,31,38,55,33,40 174.3V 68F low 17.31V, high 17.55V, spread 0.24V
2) 16.73,67,69,71,61,56,62,75,58,63 166.6V 50F low 16.56V, high 16.75V, spread 0.19V
3) 14.91,90,89,89,91,91,92,94,92,91 149.1V 39F low 14.89V, high 14.94V, spread 0.05V
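In case the shorthand isn't obvious: the first tap is written in full and the rest carry just the final two digits. A tiny decoder (assuming every tap in a row shares the first tap's integer part, which holds for these three rows):

# expand shorthand like '17.54,41,44,...' into full tap voltages
def decode_taps(row):
    first, *rest = row.split(",")
    whole = first.split(".")[0]    # integer part shared by the row (assumption)
    return [float(first)] + [float(f"{whole}.{d}") for d in rest]

taps = decode_taps("17.54,41,44,44,38,31,38,55,33,40")
print(min(taps), max(taps), round(max(taps) - min(taps), 2))  # 17.31 17.55 0.24, as in row 1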


You can see how the row 1 and 2 values are pretty widely varied. But after a drain to near empty, the voltages are quite close. This suggests that the cells are pretty well balanced capacity-wise: I'm able to drain the pack low and the voltages stay close. So, despite the quite uneven voltages at the end of the GC, and even after 12 hours' rest, the cells are pretty even in terms of capacity.
 

Registered · 6,837 Posts · Discussion Starter #107
I've been mulling over some ideas the past several days or so, vaguely revolving around low charge state usage vs. high charge state usage, potential benefits or detriments, self discharge, 'low end' electro-chemical structure, possible differences among the various types of cells out there, such as aftermarket vs. OEM and Civic vs. Insight, etc etc...

Was just reviewing a couple things that touch on the low-end structure part of this and thought I should post it here in one place...

'Low end' structure is my own terminology, as far as I know, for the idea that reactions other than the normal NiMH cell reaction can take place - because there's other stuff in the cell. The 'normal' reaction on the positive electrode, the one that gives us juice, is the conversion of NiOOH (nickel oxyhydroxide) to Ni(OH)2 (nickel hydroxide) during discharge, which happens at about 1.25V, or vice versa during charge. If you look at charge and discharge graphs you see they take place around 1.25V across most of the charge state range, i.e. there's a more or less flat charge/discharge voltage 'plateau'. I guess I'd call that the 'normal structure'... There are other plateaus that happen below 1.25V; those I call 'low end structure'...

In the past, we've talked about voltage depression and memory effect and how build-up of some other substance or whatever creates a 'secondary voltage plateau', which supposedly happens at about 0.78V. This would be part of that theoretical 'low end structure'...

In the past, after I ultra-deep discharged cells, I logged some voltages during the rebound. I'm pretty sure the plateaus that occur during this rebound period show us that different reactions are taking place; after the discharge load is removed, after the negative and positive electrodes are no longer connected via a resistance short, the 'chemistry' tries to equilibrate - basically stuff reacts and reorients so that charges and what-not balance out... Here's what those graphs look like (hours on the x-axis, mV on the y-axis). I think this is, like, a picture of 'low end structure':

[attachment 84231: rebound voltage logs after ultra-deep discharges; hours on the x-axis, mV on the y-axis]


So, there are plateaus at about 0.6V, 0.78V, and maybe 0.9V. These graphs don't show it, but there's also a plateau at around 0.2V. The exact voltages don't matter much here; what matters is the idea that there are different plateaus, along with the suggestion that these plateaus reflect different reactions - different elements or 'additives' in the cell formulations, perhaps, or different conditions, different degrees of reconditioning, deterioration, etc...

A couple other pieces in this puzzle of sorts: when OEM cells self discharge for a long period, or perhaps if they have fast self discharge, the discharge usually peters out at about the same voltages - at least it will stay there and not go much lower for a long time, usually. For instance, I have some sticks I worked with some months ago, each with a fast self discharge cell, and I checked their voltages yesterday. All the fast self discharge cells are at 0.843 to 0.881V. In the past, with sticks that sat for a year-plus, I've seen like 0.68V, and usually no lower than 0.48V...

I had one aftermarket stick that sat for a while and at least a couple cells were at zero volts. I've never seen that with OEM... This plus other things suggest to me that aftermarket cells might have a different low end structure than OEM cells...

Finally, a long time ago I emailed a battery researcher with a question about these fairly stable voltages to which self discharge takes cells. Here's a part of his reply:

"The steps occur when either the NiOOH cathode or MH anode run out of charged state. Then some other couple controls the voltage. For example, if all of the NiOOH is reduced to Ni(OH)2 but the MH electrode still has some hydrided material, the voltage will drop fairly rapidly from 1.2V to less than 1V until a new couple controls the voltage, for example CoOOH/Co(OH)2 [cobalt] or perhaps Ni(OH)2/Ni.... Also NiMH batteries have changed a lot since I worked on them, particularly in terms of self discharge behavior which used to be 15% in two days or 10 days but now can go something like a year. The new materials in them will create different patterns of self discharge plateaus."

So, I think I'm pretty convinced that different formulations cause different plateaus, and that logs of rebound voltage after an ultra-deep discharge can reveal these plateaus. I think that's about it. The longer term questions would have to do with getting more precise, like the differences between OEM and aftermarket cells, with what 'additives' do what, what additives/what reactions cause what plateaus, what forms of degradation might change the plateaus/the low end structure, and so forth. Not that I'm actually trying to answer these questions in some super-focused way. I guess it's mostly just a curiosity...

Oh, here's one more bit of info I copied from some website, on the role of cobalt in NiMH cells, not all that informative:

"Cobalt’s role in NiMH is in the nickel electrode. During charge, the cobalt is oxidised to cobalt hydroxide. Cobalt remains in the Co3+ form during discharge providing a reserve capacity to the MH electrode in doing so. On average, Ni-MH batteries contain around 4% cobalt." Nickel Metal Hydride
 

Registered · 6,837 Posts · Discussion Starter #108 (Edited)
Here's an interesting voltage-related tidbit:

I've mentioned recently in this thread or another about how I'm pretty sure the BCM has a 'slope detection algorithm'. The idea is that, by monitoring the steepness of the voltage change slope during discharge, the BCM is able to detect when a single cell is 'dropping out', i.e. nearing empty, while others remain charged. This would allow the BCM to prevent cell reversals or too-deep discharging of single cells.

Among the things I'm watching during my 'rock-bottom' charge state usage: today I measured tap voltages while in autostop at a 1.1 amp discharge load, right on the cusp of a neg recal (i.e. a cell dropping out, an effectively empty pack). Looking at what I measured, I estimate that the 'slope detection algorithm' is able to stop the discharge when a cell drops to about 1.0V - at least at this low discharge current.

Basically, I know I have a relatively fast self discharge cell and that it's triggering neg recals, and I know how charged all the other cells are. So, I can estimate how low this one cell is compared to the others simply by comparing tap voltages. The tap to which this fast self discharge cell belongs read about 0.23V lower than the others, and all of that discrepancy can be attributed to the one fast self discharge cell. At the slope-detection neg recal trigger point, all other cells were at about 14.59V/12 = 1.216V; this fast self discharge cell was at about 1.216V minus 0.23V = 0.986V.

edit: Here's some additional data. I did this some weeks ago and the numbers were as follows: low tap 0.18V lower than others, most cell voltages at ~1.196V, fast self discharge cell voltage = 1.196V minus 0.18V = 1.016V at 'slope detection neg recal trigger point'.
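The estimate reduces to a couple of lines (a sketch of the arithmetic above; the assumption is that the low tap's other 11 cells sit at the healthy-tap per-cell average):

# assumption: the low tap's other 11 cells match the healthy-tap per-cell average
def dropped_cell_voltage(healthy_tap_v, tap_shortfall_v, cells_per_tap=12):
    """Estimate the empty cell's voltage from a tap reading shortfall."""
    return healthy_tap_v / cells_per_tap - tap_shortfall_v

print(round(dropped_cell_voltage(14.59, 0.23), 3))        # 0.986 V, this trial
print(round(dropped_cell_voltage(1.196 * 12, 0.18), 3))   # 1.016 V, the earlier trial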
 

Registered · 6,837 Posts · Discussion Starter #109 (Edited)
....I estimate that the 'slope detection algorithm' is able to stop the discharge when a cell drops to about 1.0V - at least at this low discharge current.
Was just thinking about this last caveat. I'm trying to conceptualize whether it matters what current the discharge is happening at - I'm thinking it doesn't. Based on observations while driving, it doesn't look like there'd be a difference. If I'm humming along at, say, 20-30 amp discharge, the BCM will detect a near-empty cell at a charge state maybe about 5% higher than it does at 1.1 amp autostop discharge. I'm trying to wrap my brain around the actual calculation taking place - or maybe not 'the calculation' but its impact/effect...

I'm thinking that if an actual calculation of the discharge curve slope takes place and is used to determine "near empty," then it seems like it shouldn't matter whether the discharge current is 1.1 amps or 20 amps or whatever, that it will always happen at about the same cell voltage, all else being equal.

The choice of this slope value should be strictly based on the discharge voltage characteristics of the Insight NiMH cell, and one of those characteristics is a quickly steepening, increasingly negative slope when the cell is near empty, at around 1V. So, whether you 'fall off' that slope at 20 amps or at 1.1 amps or whatever, it will tend to happen at about the same cell voltage...

One caveat, though: the cell voltage at which detection happens would probably be skewed to some extent with messed-up, voltage-depressed, or high internal resistance cells - probably more so for voltage-depressed ones. For instance, I've seen plenty of cells with very ill-defined slopes nearing empty; they don't tank suddenly like a normal cell - rather, voltage often just kind of peters out linearly. My guess is that cells in voltage-depressed packs would probably be driven lower than the BCM would like to drive them, say lower than 1V, simply because the discharge slope near empty never gets steep enough soon enough. On the other hand, it's likely some other 'algorithm' would come into play before a single cell like this got driven 'too low', such as total tap voltage just getting too low first...

edit: I happen to have a couple graphs on my desktop that can be used to illustrate the difference between voltage depressed cells (pairs of cells, at least) and more or less normal ones. These are 19.5 amp discharges...

This first image shows normal cells. You can see how the curves tank pretty quickly near the end of discharge:
[attachment 84261: 19.5 amp discharge curves for normal cells - the curves tank quickly near end of discharge]


This second image shows voltage depressed cells. The curves across the whole charge state range are lower, yes, and linear, but what's important here is that that behavior, that pattern, continues all the way to empty. If the BCM has a specific slope value that it uses to detect empty, I'm not so sure it'd be able to spot when one of these cells nears empty, or at least it'd probably be much later than it's supposed to be, at a lower voltage:
[attachment 84262: 19.5 amp discharge curves for voltage-depressed cells - lower, linear, with no sharp drop near empty]


And, BTW, here's an image of the above cells after a tap-level ultra-deep discharge, which corrects most of the voltage depression:
[attachment 84263: the same cells after a tap-level ultra-deep discharge, with most of the voltage depression corrected]
 

vote 4 mawah · 288 Posts
Observing the graphs' plotted horizontal lines is a bit confusing, to say the least. I'm not sure why the graphs are plotted the way they are, but trying to visualize along the horizontals is an exercise.

I see what I believe you / eq1 are looking at as far as the individual cells in the 6 cell sticks and how they drop off in separate curves at different rates, as shown in the second graph when comparing it with graphs 1 and 3. And if this is the only indication being looked at in this stick of 6 cells, then that should be obvious to anyone looking at the three graphs.

The things I'm looking at are what the other parts of the graphs are showing and how they get plotted. (The one 6 cell stick discharge curve on the graphs - Capacity (mAh) and Time - looks, in overview, much the same as a pack discharge curve on the GC01's LabView upper graph of voltage and time.)
The thing is, the graphs are not drawn consistently on either the Capacity (mAh) scale or the Cell 1-6 scale on the right. It's not a very big difference, and unless you're comparing the high discharge curve against the horizontal graph lines it might not be noticeable.

Example: start at 5000 mAh capacity on each graph and notice its relative position to the graph's horizontal line. Each of the three graphs is drawn differently in that respect.
Example 2: also look at the Cell 1-6 side of the graph - it's drawn differently on graph 3.
Not a huge thing, but it makes visualization and horizontal plotting of the inverse slope (the darker line with the squiggles in it, starting at 10000 mAh capacity and rising to the 25000 mAh line in graph 1) an exercise.

Example 2 is plotted on the GC01 LabView second (bottom) graph and shows a dual slope, which MikeD commented on, paraphrasing: "there is a diminishing return of evaluation with the dual slope". I believe the meaning is that there is benefit to the dual slope graph when it's understood, although getting an idea of what the basic single charge and discharge slopes indicate is of more importance to users.

Final thoughts: I'm glad the time and Int Est Temperature are consistent in the three graphs, or I'd probably be pulling out clumps of hair like with some other graphs I've seen with little or completely unknown definitions of the axes' scales.
And the pack tap plug on the HCH2 not only reads stick pairs, but also reads both pack halves and two, three, and/or four stick pairs together, as well as full pack voltage. Not sure how any of that would be computed by the BCM / MCM / ECM without access to the HDS, both before and after the IMA update.
 

Registered · 6,837 Posts · Discussion Starter #111 (Edited)
^ Not sure I really understand the gist of your gripes. In general it sounds like you're wanting to read more from them than what I mainly intended them to show. The important comparison is simply the steepness of the curves at the end of discharges for graphs 1 and 2. Graph 1 for normal, good cells has a super steep slope at the end, and I imagine the BCM's slope detection value is for a steep slope like this. So, if the BCM tried to apply that steep slope value for voltage depressed cells, to determine when a cell is empty, it probably wouldn't work. That's what's illustrated with the second graph. At the end of discharge the curves' slopes aren't very steep, not nearly as steep as the slopes in the first graph. That's it. That's the main point...

In any event, here are some comments to your post...

Observing the graphs' plotted horizontal lines is a bit confusing, to say the least. I'm not sure why the graphs are plotted the way they are, but trying to visualize along the horizontals is an exercise.
The horizontal lines correspond to the 'Cell 1-6' values, i.e. voltage. There are no horizontal lines for the other variables. Each colored curve represents a pair of cells.

I see what I believe you / eq1 are looking at as far as the individual cells in the 6 cell sticks and how they drop off in separate curves at different rates, as shown in the second graph when comparing it with graphs 1 and 3. And if this is the only indication being looked at in this stick of 6 cells, then that should be obvious to anyone looking at the three graphs.
Reading between your lines, yeah, this is mainly what the graphs were supposed to illustrate.

The things I'm looking at are what the other parts of the graphs are showing and how they get plotted....The thing is, the graphs are not drawn consistently on either the Capacity (mAh) scale or the Cell 1-6 scale on the right. It's not a very big difference, and unless you're comparing the high discharge curve against the horizontal graph lines it might not be noticeable....
The only discrepancy is the cell voltage axis in graph 3, which has a lower bottom value. In this context it's negligible. You can compare the 10 minute mark voltage values, for instance, and see that graph 2 is about 0.1V lower than graph 3...

The capacity axes on the left don't need to be exactly the same: we want to see the different totals at the end. Since the time axis is the same for each chart and the discharge current is the same, the capacity increments are also the same - except for the top value... In general it'd be better to use time OR capacity on the x-axis, but as I recall it can't be done with the software, or it's too cumbersome to deal with.

Final thoughts: I'm glad the time and Int Est Temperature are consistent in the three graphs, or I'd probably be pulling out clumps of hair like with some other graphs I've seen with little or completely unknown definitions of the axes' scales
FYI, the temp axis (black curve) shows temp from a single probe on one of the cells. And FYI, all the labels are defaults in my logging/graphing software; it's too cumbersome and not important enough to change them.
 

Registered · 6,837 Posts · Discussion Starter #112 (Edited)
So, back to this BCM 'slope detection algorithm' idea... I left my car in autostop again, right on the cusp of neg recal (empty cell), and checked tap voltages. I find that this might be a quite easy way for lay folk to spot which tap contains a/the wayward cell.

For example, here's my two sets of tap voltage readings. The first is right before the neg recal and the second is right after, or pretty much during, the neg recal:

1) 14.43,15.13,14.49,15.16,14.38,15.10,14.66,15.10,14.35,15.10 147.4
2) 14.33,15.05,14.39,15.09,14.27,15.02,14.43,15.03,14.26,15.02 146.5

Now, I've messed with the taps via tap discharging, so voltages are uneven because of that. Here we're just focusing on taps 1, 3, 5, 7, and 9. If you subtract the row 2 values from the row 1 values for these taps, we get:

-100,-100,-110,-230,-90, in mV.

The "-230mV" tap has the cell that's causing the neg recal, it has the empty cell.
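Scripted, the whole diagnostic is just a subtraction and a max (a sketch using the two rows above; tap numbering is 1-based):

# subtract the 'during recal' readings from the 'before' readings, flag the biggest drop
before = [14.43, 15.13, 14.49, 15.16, 14.38, 15.10, 14.66, 15.10, 14.35, 15.10]
during = [14.33, 15.05, 14.39, 15.09, 14.27, 15.02, 14.43, 15.03, 14.26, 15.02]
drops_mv = [round((b - d) * 1000) for b, d in zip(before, during)]
print(drops_mv)                     # [100, 80, 100, 70, 110, 80, 230, 70, 90, 80]
suspect = max(range(10), key=lambda i: drops_mv[i])
print(f"tap {suspect + 1} holds the empty cell")   # tap 7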

edit: hey, just thought of something. Given my wacky uneven voltages, the absolute low voltages on some taps, plus just me knowing that I have a fast SD cell in tap 7, I think we can be reasonably assured that there is indeed slope detection taking place.

The voltage difference between high and low taps is pretty big, like 0.7-0.8V - but that's not causing the recal. And the voltage on the lowest tap is pretty low, 14.26V - but that's not what's causing the neg recal either. Tap 7 has a higher voltage than the lowest tap, at 14.43V. What's key is the -0.23V change, i.e. the slope.
 

vote 4 mawah · 288 Posts
eq1, thanks for explaining the graphs in more detail. Please accept my apologies if my post sounded like I was complaining. It's obvious I was mistaken with regard to some of the data points shown and how the scales correlate. Also thanks for correcting my mistake with respect to the dark temperature line.

With your response above I see more detail and understand much better how the drop-off curve in graph 2 shows cell pairs dropping off at a much different rate and sequence than in the other two graphs...
 

Moderator · 7,185 Posts
Hmm.. What makes you so certain that it isn't your lowest voltage tap that is causing the negative recal?
 

Registered · 6,837 Posts · Discussion Starter #115
Hmm.. What makes you so certain that it isn't your lowest voltage tap that is causing the negative recal?
I've been 'doing neg recals' like this for about a month now, sometimes with slightly different conditions, such as different tap voltages. Basically, I know I've got a relatively faster self-discharge cell in tap 7, and every time I've done this little test it has all happened the same way. This last installment just happens to include some stark/extreme tap voltages/differences, but since it too has 'happened the same way' - tap 7 shows about the same voltage change at neg recal as in the previous couple trials, for instance - it's pretty clear that those other parameters, low absolute voltage and tap-to-tap voltage differences, aren't the ones that cause the neg recal...

If you scroll up to post 108 that's where I start talking about this, and include a few numbers, such as the same -230mV change at neg recal that I see in this last trial...

Here's a set of data and some methods... A few days ago I started measuring twice, B4 and after the neg recal; those runs are a little more informative. But even so, you can get about the same info from just one set of tap readings on the cusp of a neg recal...

A couple weeks ago I recorded these tap voltages on the cusp of a neg recal; it probably neg recal-ed while I was taking the readings. This is in autostop at ~1.2 amp load:
14.45,41,44,43,40,38,17,43,37,36 143.5V

Tap 7 is 14.17V, the others about 14.35V, a difference of -0.18V. All of that difference is attributed to the one faster-SD cell. We estimate that all cells are at about 143.5V/120 cells = 1.20V - except the FSD cell in tap 7, which is at 1.20V - 0.18V = 1.02V.

In this trial tap 7 has the lowest voltage, so at this point we can't really rule out that tap 7 having the lowest voltage, or crossing some low-voltage threshold, is the cause of the neg recal... Although I didn't write it down, I probably checked voltages B4 the neg recal, so I know there was an actual drop, not just a low voltage.

Then, some days ago, I did the same thing - autostop discharge on the cusp of neg recal, ~1.1 amp load, and I measured tap voltages before the neg recal and after:

B4: 14.81,79,81,83,77,77,69,77,76,74 147.3
Af: 14.66,65,67,68,62,62,36,62,61,59 145.9

Here we can see that tap 7 has dropped by 14.69 - 14.36 = 0.33V, basically in the time it takes to measure 10 voltage taps twice, while the neg recal is happening. Some of that drop is in the other cells; most is in the FSD cell. We can apply the same math as in the previous trial to estimate the drop of the FSD cell that causes the neg recal: the difference between tap 7 and the others at neg recal is 14.36V minus 14.59V = -0.23V; all cells except the FSD are at 14.59V/12 = 1.216V; the FSD cell is at 1.216V minus 0.23V = 0.986V, at neg recal...

Here's the voltage change of each tap during the B4 and After neg recal interval:
-0.15,14,14,15,15,15,33,15,15,15...

Tap 7 again has the lowest voltage (14.36V), so we still can't rule that out. But, with these data we can actually see the voltage drop happening at the same time as the neg recal, instead of inferring it from the circumstances, etc...

Finally, the last trial includes uneven tap voltages and tap 7 NOT having the lowest voltage at neg recal:
B4: 14.43,15.13,14.49,15.16,14.38,15.10,14.66,15.10,14.35,15.10 147.4
Af: 14.33,15.05,14.39,15.09,14.27,15.02,14.43,15.03,14.26,15.02 146.5

Tap 7 drops by 0.23V during the B4 and after neg recal interval, about the same as the other trials. Other taps drop only by about 0.1V...

Tap 7's voltage drops and I get a neg recal, once again, while I know there's a fast SD cell, i.e. 'it's all happening the same way' during multiple trials. Tap 7 doesn't have the lowest voltage, that would be tap 9, so I think we can rule that out as the cause of the neg recal... Same with the uneven voltages...

I think technically I'd need timed-logged data and a statistical test to show that the probability of tap 7's voltage drop and a neg recal coinciding can't be due to chance. I'm just not that into it... It's not only these 3 trials, though, that this is based on. I've done at minimum 20 neg recals over the last month (probably more like 30 or 40), so you just kind of get a feel for what's going on, even if I didn't do the auto-stop thing and measure tap voltages at exactly these intervals (I have measured tap voltages every day though for the past month or so)...

Doesn't it seem pretty clear to you, that the BCM has a slope detection algorithm and that that's what causes most if not all bonafide 'neg recals'? On the other hand, it does seem like a tap resting voltage at or under 14.4V can cause a neg recal, just that it's not the case in these trials here, or rather, since there's a load (in autostop, ~1.1 amps), the voltages I'm measuring aren't resting voltages...
 

Registered · 309 Posts
My battery taps are a lot tighter than what Eq1 is seeing. I've been monitoring my ten tap voltages as I drive. The taps are typically only 3 tenths or less from the highest tap voltage to the lowest. On rare occasions, I've seen a five-tenths voltage differential, but the next scan of the voltages shows them back within one or two tenths (the most common differential). The taps are now sampled once every minute, so the battery is scanned about 30 times on my commute into work, and another 30 on the way home. (I was sampling at a shorter interval, but didn't find frequent sampling to be necessary... kinda like checking your oil pressure gauge every minute when nothing's wrong.)

I haven't seen a re-cal since I started monitoring. I'm trying to figure out what the re-cal threshold might be as well. Based on Eq1's observations it looks like maybe a 3% voltage drop from the pack average. While driving, the biggest tap voltage differential I've seen usually occurs when the cells are being drawn down from load.
 

Registered · 6,837 Posts · Discussion Starter #118
My battery taps are a lot tighter than what Eq1 is seeing. I've been monitoring my ten tap voltages as I drive. The taps are typically only 3 tenths or less from the highest tap voltage to the lowest...
What numbers of mine are you looking at? Lately I'm doing experimental stuff, so most of the tap voltages I've posted aren't representative of what they'd 'normally' be. Plus, I've never looked at tap voltages under load (except the meager few-amp autostop variety)... Does "3 tenths" mean, like, the difference between 14.5V and 14.8V? If so, I'd say that's probably a bit bigger than the spreads I'd normally see, based on what I've seen working with sticks and cells and stuff...

I'm trying to figure out what the re-cal threshold might be as well. Based on Eq1's observations it looks like maybe a 3% voltage drop from the pack average. While driving, the biggest tap voltage differential I've seen usually occurs when the cells are being drawn down from load.
Well, what I've described above suggests that most 'neg recals', bona fide neg recals, at least, are caused by a cell becoming empty before other cells, and that empty point is determined by the steepness of the tap voltage slope.

I.e. at time 1 tap voltage is say 14.80 volts under load, and at time 2 it's 14.50 volts, where the time interval is very small, like seconds. So, let's say this change happens in 2.3 seconds: the slope would be (14.50V-14.80V)/2.3 seconds, or -0.13V per second...

In my examples/trials in previous posts I'm seeing like 0.2-0.3V drops at neg recal. If you were looking at tap voltages, not pack voltages, that voltage drop represents 0.25V/~14.4V = 1.74%; at the pack level it'd be even smaller, only 0.25V/144V = 0.174%... Basically, you're not going to be able to see a cell going empty ('dropping out') looking at aggregates while you're driving, certainly not at the pack level, probably not even at the tap level...
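The same arithmetic in script form (the 14.80-to-14.50 drop over 2.3 seconds is the hypothetical example above, not a measured value):

# slope and 'visibility' arithmetic from the paragraphs above
dv, dt = 14.50 - 14.80, 2.3            # V change over seconds (hypothetical example)
print(round(dv / dt, 2))               # -0.13 V/s
drop = 0.25                            # typical single-cell drop-out seen at neg recal
print(f"{drop / 14.4:.2%} of a tap, {drop / 144:.3%} of the pack")  # 1.74%, 0.174%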

Also, as far as I can tell, the BCM triggers a neg recal when a tap doesn't rebound above 14.4V, i.e. when resting voltage is below 14.4V. I often feather the throttle/assist/regen, when I'm messing around, testing stuff, to keep resting voltage above 14.4V (well, pack resting voltage above 144V) because if I don't I'll get a neg recal. In my book it's a pretty solid indicator of an empty pack. The thing is, I'm looking at pack voltage and it's most likely a tap thing: if your cells/taps are imbalanced, then you can get this kind of neg recal even though pack resting voltage is well above 144V - because one tap is below 14.4V while others are higher, hence total pack voltage will be higher than 144V. This is actually a strong indicator of a balanced pack: IF you can discharge your pack to a resting voltage of about 144V and NOT get a neg recal, your pack is balanced...
 

Registered · 824 Posts
Are you able to see the stick pair voltages in real time? What we need collectively is serious data logging (100 Hz stick pair samples, ISOC and temps in real time, xSCI messages at full rate, actual current readings, and ideally key OBD messages if not real-time sensor data).

I no longer trust the IMA dash meter to tell me how much current is really being supplied/drawn, and I wonder if some of the stuff the IMA does when it senses a pack that's "out of whack" isn't maybe insulting to cells (pulling or pushing too much current above 90% or below 10% cell capacity).
 

·
Moderator
Joined
·
7,185 Posts
The dash meter is a kW meter. Full scale is 6.5kW.

But yeah, the car can draw up to ~12kW for up to ~4 seconds at a time, so it doesn't tell you everything...
 